
A METHOD FOR THE ANALYSIS OF HANDMADE 
ELECTRONIC MUSIC AS THE BASIS OF NEW WORKS 


Ezra Jeremie Teboul 

Submitted in Partial Fulfillment of the Requirements 
for the Degree of 

DOCTOR OF PHILOSOPHY 


Approved by: 

Dr. Curtis R. Bahn, Chair 
Professor Michael Century 
Professor Shawn Lawson 
Dr. Kurt J. Werner 
Dr. Nina C. Young 



Department of Electronic Arts 
Rensselaer Polytechnic Institute 
Troy, New York 


Submitted August 2020 



In my last twelve years on this continent I have learned about the lands I have called 
homes and how they were violently, coercively and illegitimately taken from their previous 
carers. These peoples are called the Mohicans, the Lenape, the Abenaki and the 
Massachusetts. To them, I express the solemn gratitude of a traveler. It is my hope that 
reparations, including the possibility of returning sovereignty to the descendants of those 
people, will be enacted as soon as possible. Particularly, I call on all settler institutions to 
acknowledge their responsibility in this process. This is especially true of my hosts, 
Hampshire College, New York University, Dartmouth College, and Rensselaer Polytechnic 
Institute, whose attitudes as settlers have ranged from self-denying to destructive. 
Education never warrants such damage or the erasure of such damage. 





CONTENTS 


LIST OF FIGURES . vi 

ACKNOWLEDGEMENTS . viii 

ABSTRACT . xii 

1. Introduction . 1 

1.1 How can we deal with non-discursive, technical objects of study in music? . . 2 

1.2 Technologically dependent music after David Tudor’s Bandoneon ! (1966) . . 5 

1.3 What does critically incorporating reverse engineering into music studies mean? 8 

1.4 A method, case studies, and pieces . 9 

2. State of the Art . 12 

2.1 Initial texts: Pinch and Trocco, Collins . 12 

2.2 Operational references . 16 

2.2.1 Music studies: ethnomusicology, musicology, organology . 17 

2.2.2 Media archaeology . 23 

2.2.3 From circuit theory, to systems theory, to reverse engineering of circuits 

and code . 27 

3. Method . 33 

3.1 Selecting a work . 34 

3.2 Analysis of the system . 35 

3.2.1 Canonical and vernacular circuit topologies: identifying connections . 36 

3.2.2 Extending this analysis to computer code in musical works . 38 

3.2.3 Interfaces . 39 

3.3 Communication with makers . 40 

3.4 Re-interpretation: “covers” . 42 

3.5 Production of documentation . 43 

3.6 Example . 44 

4. Foundation Work . 47 



5. Case Studies . 54 

5.1 Steve Reich’s 1969 Pulse Music and the “phase shifting pulse gate” . 54 

5.1.1 Materials available . 55 

5.1.2 Analysis of the system . 56 

5.1.3 A “cover” of Pulse Music . 59 

5.1.4 Interface . 62 

5.1.5 Conclusion . 64 

5.2 Pulse Music Variation . 66 

5.3 Paul DeMarinis’ 1973 Pygmy Gamelan . 67 

5.3.1 Materials available . 67 

5.3.2 Analysis of the system . 70 

5.3.3 Covering Pygmy Gamelan . 72 

5.3.4 Mathematical analysis of the active bandpass filter in Pygmy Gamelan 76 

5.3.5 Interface . 85 

5.3.6 Conclusion . 85 


5.4 Ralph Jones’ 1978 Star Networks At The Singing Point . 88 

5.4.1 Materials available . 88 

5.4.2 Analysis of the system . 89 

5.4.3 Interface . 91 

5.4.4 Performing Star Networks at the Singing Point . 91 

5.4.5 Conclusion . 93 

5.5 Tamara Duplantis’ 2015 Downstream . 96 

5.5.1 Materials available . 96 

5.5.2 Analysis of the system . 97 

5.5.3 Interface . 102 

5.5.4 Playing Downstream . 104 

5.5.5 Conclusion . 105 

5.6 The music implicit in 3D printers: Stepper Choir . 106 

5.6.1 Starting materials . 107 

5.6.2 Analysis of the system . 107 

5.6.3 Interface . 113 

5.6.4 Conclusion . 114 

5.7 Sounds with origins: Kabelsalat . 114 

5.7.1 Analysis of the system . 114 

5.7.2 Implementation and Iterations . 115 



5.7.3 Interface . 116 

5.7.4 Conclusion . 116 

6. Results . 118 

6.1 Understanding works of handmade electronic music . 118 

6.2 Suggestions for documentation of future work . 120 

6.3 The necessity of mixed-method analysis . 121 

7. Conclusion . 124 

7.1 We can make technical objects of study in music partially discursive with two 

adapted tools: engineering analysis methods and artistic practice . 124 

7.2 The lossy nature of handmade electronic music . 124 

7.3 Electronic music history and knowledge production . 128 

7.4 Future Works . 130 

7.5 Afterword . 130 

BIBLIOGRAPHY . 135 

APPENDICES 

A. Interview with Ralph Jones . 153 

B. Interview with Tamara Duplantis . 171 

C. Patch for Pulse Music and Pulse Music Variation . 182 

D. Patch for Pygmy Gamelan . 185 

E. Code for Tammy Duplantis’ Downstream . 186 

F. Patch for Stepper Choir, Music of the Spheres and Multichannel Motor Music . . 190 

G. Code for matrix inversion and results of LTSpice simulations . 193 



LIST OF FIGURES 


1 Albrecht Dürer’s Melencolia I (1514) . viii 

5.1 A transcribed diagram showing the clock mechanism common to the entire circuit, one of the twelve gates, and the system’s inputs and outputs. Based on Reich and Hillier 2004, 39-40 . 58 

5.2 A re-writing of Reich’s score for Pulse Music as a table . 60 

5.3 The opening chord in Pulse Music (for Phase Shifting Pulse Gate). These pitches 
spread into an arpeggio over the course of the composition, before returning to 

a chord at the end of the piece . 61 

5.4 Paul DeMarinis’ schematic for the Pygmy Gamelan device (1973). Credits: Paul 

DeMarinis. Used with permission . 69 

5.5 A detail of the Pygmy Gamelan schematic focusing on the subassemblies to be 

studied . 78 

5.6 An idealized schematic of one of the filter input subassemblies in Pygmy Gamelan. Based on a drawing by Kurt Werner . 78 

5.7 An idealized schematic of one of the resonant filter subassemblies in Pygmy 

Gamelan. Based on a drawing by Kurt Werner . 79 

5.8 The plot produced by the code in Appendix F, showing the five closest matches 

produced by the model to the measured resonance peaks in the Forest Booties 
recording . 83 

5.9 The connection diagram provided by Jones for performance of Star Networks in 

2018. Used with permission . 93 

5.10 My setup for performing Star Networks at the Singing Point with Composers 

Inside Electronics, The Kitchen, NYC (March 2018) . 94 


5.11 A logical diagram showing the variables, user inputs and processes included in 
the source code for Downstream and explaining how it generates and processes 
data to produce the visuals and audio. Made with assistance from Shawn Lawson. 99 


5.12 My initial diagrammatic representation of Downstream . 101 

5.13 Tamara Duplantis’ hand-drawn representation of Downstream .102 

5.14 An adaptation of David Tudor’s Rainforest IV expanded to explicitly include 

feedback loops as well as other resonating mediums than mechanical ones. ... 115 




6.1 An idealized representation of my practice-based research cycle. Not pictured 

are the various concurrent cycles happening simultaneously. . 122 

C.1 The Purr Data patch used to perform Pulse Music and Pulse Music Variation at the Arete Gallery in March 2019 . 183 

C.2 The subpatch for oscillator 7. Each of the 12 oscillator subpatches contains a similar arrangement, with modified routing . 184 

D.1 The Pygmy Gamelan Purr Data patch as discussed in section 5.3 . 185 

F.1 The MaxMSP top level patch for Stepper Choir, Music of the Spheres and Multichannel Motor Music . 191 

F.2 The MaxMSP subpatch which parses the spatialization info for the x coordinate as processed by the program. The subpatches for the y and z axes are comparable . 192 

G.1 The plot produced by LTSpice for the filter in figure 5.7 with R = 20 kΩ and C = 33 nF . 198 

G.2 The plot produced by LTSpice for the filter in figure 5.7 with R = 4.7 kΩ and C = 68 nF . 198 

G.3 The plot produced by LTSpice for the filter in figure 5.7 with R = 13 kΩ and C = 22 nF . 198 

G.4 The plot produced by LTSpice for the filter in figure 5.7 with R = 15 kΩ and C = 15 nF . 199 

G.5 The plot produced by LTSpice for the filter in figure 5.7 with R = 2.2 kΩ and C = 82 nF . 199 



ACKNOWLEDGEMENTS 


To those who took the time 



Figure 1: Albrecht Dürer’s Melencolia I (1514) 


...nothing to lose but their chains. 
(Marx, Engels, and Dean 2017, 103) 


And more particularly: 

Papa and Maman, thank you for always trying to understand what I am attempting to do. Sometimes it is difficult to explain and even harder to understand, but 28 years later you are still trying, and that makes me happy. 

Laela and Ruben, thank you for listening to absolutely anything that goes through my head. Call me more often! You never bother me. 

Papi and Mamie, sorry for really not liking the beach and the sun. You two are much better! 

Baba and Grandpa, I wish I could have told you what I am doing these days. 

Laurent, Céline and Quentin, I wish I had told you in person more often. 

Samuel, Thomas, Fabien and Claudia, I can’t wait to tell you everything. 

Marilla Cubberley and Oskar Peacock, for all the late night chats, tacos, food explorations, sharing and honesty. Give Scully an extra big neck scratch for me sometime. 

Becca Hanssens-Reed, for being willing to talk about absolutely anything at absolutely any time, amongst so many other things. Give Pierre an extra minnow for me sometime. 

Mina Beckman, RJ Sakai, Andrew Feinberg, Molly Haynes, Kai Beavers, Kim Parente, 
Sparkles Stanford, Kevin Schwenkler, Trevor Le Blanc, and Darla Stabler, for always acting 
like I just left when I show up. 

Ted Levin, for helping me realize I wanted to do this, with a simple question. Brevity 
is a clear teacher. 

Seth Cluett, for the endless and unconditional encouragements, and contributing in no 
small way to making the place he grew up in feel like home to me. 

Aden Evens, for being the first professor to trust in this project from up close. 

Yasser Elhariry, for reading “The Poetics of Signal Processing” with me and asking the 
question I’m still trying to answer. 

Paula Burgi, for the muscovite, and for meeting me for tea in the middle of the worst 
ice rain I’ve ever seen. 

Tim Taylor, Rosa Delgado, Shannon Werle, Andy Sarroff, Carlos Dominguez, Yuri Spitsyn, Victor Shepardson, Sam Nastase, Beau Sievers and Dominic Coles, for the confidence you gave me in making a beautiful racket with and/or for you. 

Karl Hohn, for your understanding and patience as I committed time to school instead 
of playing music. Somehow always inviting me back to add to your music means the world 
to me. I can’t wait to make a bunch more records with you! 



Garret Harkawik, for making me feel like family almost as soon as we met. 

Matthew D. Gantt, for the wild late nights and the gentle pushes in directions I forget exist. Your encouragements mean so much. 

Tanya St-Pierre, Philippe-Aubert Gauthier, Gaétan Desmarais, Fred Dutertre, Amélie Deschamps, and Éric Desmarais, for your help and your friendship ever since my three months in Sherbrooke. 

Alex Jenseth, for all the cocktails, patience, and support, you deserve some sort of medal. Housemates are the unspoken heroes of long writing projects. 

Hannah Johnson, for the funniest birthday present I’ve ever gotten. 

Lee Nelson, Caroline Mason, Bucky Stanton, Mitch Cieminski, Maggie Mang, Jennifer 
Cardinal, Jim Malazita, Abby Kinchy, I’ve learned so much from you and I feel lucky to be 
part of your STS family. 

Van Tran Nguyen, Dan Seel and Morgan Johnson, for being outstanding comrades. 

Audrey Beard, for somehow making every one of our chats remind me of what is worth 
caring about and why. 

Amelia Peterson and James Davis, for showing me that imagining possible worlds is 
sometimes just as good as working on our own. 

Curtis Bahn, for trusting me and my project since our first meeting, for the unending patience and support, for all the priceless stories about Paul and Charles and stacks of Fortran cards and improvising. Also, in retrospect, for never giving me a deadline, even when I asked for one; no better way to figure out what one actually cares about. It was a pleasure to be your last advisee. 

Nina Young, Kurt Werner, Shawn Lawson, and Michael Century, for being a compassionately challenging committee from beginning to end. Your support has been incredibly appreciated, and it has shaped my attitude as a teacher and researcher. Being shoulder to shoulder with friends for a while seems much better than standing on that of giants. 

David Kant, Madison Heying, Aphid, Joe Pfender, Eamonn Bell, Einar Engstrom, Charles Eppley, Brian Miller, Stephanie Lovelace, Lex Baghat, Kate Galloway, Luke DuBois, Dan Warner, Thomas Patteson, Matt Sargent, Ken Ragsdale, Tomie Hahn, Jefferson Kielwagen, and Nicholas De Maison, for the endlessly nourishing exchanges over the years. You make words feel valuable and the world feel small and kind. 

Libi Rose Striegl, for being so extremely good online, it’ll be nice to actually meet you. 



Nicolas Collins, for your generosity in making me feel like there was really something 
worth talking about here. This document wouldn’t exist without you. 

Don Buchla, Pauline Oliveros, Jean-Claude Risset, for teaching us to listen, to wire up 
what was there, be weird as hell, and for making me appreciate mysteries. 

Ralph Jones, Paul DeMarinis, Ezra Buchla, Tamara Duplantis, Birch Cooper, Philip 
White, Victoria Shen, George Lewis, Suzanne Thorpe, Bonnie Jones, Anastasia Clarke, 
Martin Howse, Charles Dodge, Mike Buffington, Joshua White, James Hoff, Casey Anderson, 
David Dunn, Jessica Rylan, Gordon Mumma, Ragnhild May, Kristoffer Raasted, Jeff Snyder, 
Hannah Perner-Wilson, Raff Baecker, Lia Mice, Joshua Florian, Jason Bernagozzi, Deborah 
Bernagozzi, Dave Jones, Hank Rudolph, Brian Murphy, Asha Tamirisa, Quran Karriem and 
Becca LUiasz, for being so generous with your insights about what, no matter how long I 
spend explaining it, is still magic when it hits my ears/eyes. 

Tara Rodgers, Georgina Born, Shannon Mattern and Alejandra Bronfman, for being 
endlessly and fiercely inspiring in imagining what thinking and talking between sound and 
humans could and should mean. 

Emma McKay, for the caring listening, thoughtful questions, and feedback you offered 
as I finished writing this. Also, just for being you. 

The world is better for all of you. 

Aboltar cazal, aboltar mazal. (Change your home, change your luck.) 



ABSTRACT 


“Handmade electronic music” is the set of sonic works in which a unique combination of 
technical systems must be used. In other words, handmade electronic music emphasizes 
and explores the sonic potential of electronics, shifting the location of expertise to include 
manufacturing in addition to performance. In this document, I suggest that a generalizable 
mode of study of such works has remained elusive but can be formalized by acknowledging 
prior work in related disciplines: technology studies and engineering. I assess the potential 
of, specifically, reverse engineering, as a qualitative practice-based research method which 
can serve as the basis for such a generalizable mode of study. A critical deconstruction of 
technical objects for comprehensibility, maintenance, and improvement, this approach has 
underexplored potential in the humanities generally, and electronic music discourse specifically. 

I focus on electronic circuits, computer code, and human-legible representations of these 
technical media. I offer a view of reverse engineering through the lens of technology studies 
as not only a complement to interview and archival-based research, but also as a necessary step in the full documentation of artistic experiments rooted in electronic mediums. I 
present how this new approach to musical analysis in a sociotechnical context enables better- 
informed documentation and reiteration of electronic works by connecting material decisions 
with artistic consequences. 

The generalizable nature of this reading is illustrated through six case studies: four focusing on previous projects by significant practitioners in the field, and two detailing my own 
inquiries. My pieces are the aesthetic response to the scholarly process of analyzing and 
reverse-engineering systems with musical potential. In other words, if reverse engineering 
allows for connections to be made between tools and aesthetic experiences, it can also reveal 
alleys of artistic experimentation with musical technologies left unexplored by past artwork. 
Here, research and practice do not only serve each other, they are co-constructed. 



CHAPTER 1 

Introduction 


In Handmade Electronic Music, I defined the practice as the “coordinated production of electronic instruments and electronic sound” which “requires paying attention to the electrical properties of materials, the sonic consequences of these things as their interconnections evolve, and our ability to create situations favorable to the exploration of those evolutions.” (Teboul 2020a) This project focuses on documenting and understanding this specific type of electronic music in which the making of artworks and the making of artefacts are overlapping practices. This focus on the handmade potential of electronics in sound is indebted to 
the work of composer Nicolas Collins and the research of ethnomusicologist Andrew Dewar 
(Collins 2006; Dewar 2009). I focus on handmade electronic music in order to crystallize the 
case for critical uses of reverse engineering, the process of identifying a system’s components 
and their relationships for both “maintenance and new developments,” (Chikofsky and Cross 
1990, 15-16) in arts scholarship. This is done through the study of past works and the development of new pieces which aestheticize the insight provided by this critical use of reverse engineering. As such, the present document occasionally relies on concepts and vocabulary 
from both music and engineering. 

The combination of analysis and practice in scholarly research is sometimes called 
practice-based research. As such, this document adapts a structure discussed by Linda 
Candy in a report on the early history of the term (Candy 2006, 7). “Practice-based research”, she writes, “is an original investigation undertaken in order to gain new knowledge partly by means of practice and the outcomes of that practice.” (1) In broad terms, this 
project uses the study of past art practices to inform the creation of my own, which is based 
on theoretical developments in musical instrument design and poetic applications of signal 
processing theory. 

This introduction frames the problem at hand as an unresolved issue in electronic music 
scholarship which reflects boundary work between the humanities and engineering. Boundary 
work, as discussed by historians of science such as Thomas Gieryn, describes researchers’ efforts to present their work favorably in contrast with other public or technical endeavors 
(Gieryn 1983, 781). In this particular instance, boundary work separated music studies from 




technical analysis of electronic artifacts. For this study, I document and discuss this historical 
boundary by focusing on the development of experimental music from David Tudor (born 
1926) to today, before summarizing a practice-based research method that will be detailed in 
chapter three. With this critical tool and the case studies that follow, I suggest that reverse 
engineering as used in technical fields has critical potential in arts theory and practice because 
it engages with the electrical medium on the terms of its design, operation, and materiality, 
offering insights that cannot be obtained otherwise. 

1.1 How can we deal with non-discursive, technical objects of study 
in music? 

A significant portion of contemporary music is produced and consumed using electronic means. This is in part due to the continually increasing accessibility of the devices that make our music electronic (Théberge 1997). Théberge writes: 

Electronic technologies and the industries that supply them are not simply the 
technical and economic context within which ‘music’ is made, but rather, they 
are among the very preconditions for contemporary musical culture, thought of 
in its broadest sense, in the latter half of the twentieth century. (Théberge 1993, 

151) 

This dissertation examines the mechanisms that link these preconditions with the musical consequences of their use by looking at specific artworks that are concerned with the musical potential of electronics. It operates between two large socio-technical systems (Hughes 
1987), that of electronic commodities, and musical commodities, as discussed by the historian 
of audio technologies Roland Wittje (Wittje 2016, 7). It offers a combination of cultural, 
technical and musical studies approaches to best situate individual instances within this 
complex context. In other words, this study is concerned with what ethnomusicologists René 
Lysloff and Leslie Gay call a technoculture. In this technoculture, various traditions of 
music theory build off centuries of cumulative knowledge production to contextualize and 
qualify harmony, rhythm, timbre and melody, as well as their sociocultural explanations and 
significance. However, these music theories tend to not incorporate electronics theory, even 
if they have come to often use electronically mediated analysis tools (Lincoln 1970; Mor, 
Garhwal, and Kumar 2019). A full understanding of the boundary work that led to this 



situation is out of the scope of this study; however, its consequences are visible in contemporary scholarship concerned with this electronic music. Lysloff and Gay write: “by examining 
technocultures of music, we can overcome the conventional distinction, even conflict, between 
technology and culture.” (Lysloff and Gay 2003, 2-3) In this statement, they are echoed by 
communication scholar Jonathan Sterne and musician Tara Rodgers, who later wrote: “media criticism has become a standard practice across the humanities. We are simply arguing 
for the inclusion of signal processing within that critical lexicon, for in many cases it is just 
as important to the meaning of mediatic sound as the notes in a score, the choice of violins in 
a movie soundtrack, the words said or unsaid in a phone conversation.” (Sterne and Rodgers 
2011, 39) Musicologist You Nakai identifies this conflict, a lacuna in musicology’s toolkit for contending with new non-discursive objects of study, while considering the electronic work of 
composer David Tudor. Indeed, Tudor, after his shift from piano performance to composing 
electroacoustic works that involved the development of custom audio technologies (Tudor and 
Schonfeld 1972), remains little discussed: 

What Tudor was up to in those later days is mediated neither by discourse nor 
writing—the two primary objects of study in musicology (...) The attention 
paid to his activities with the piano is largely due to the inefficiency of our own 
analytical tools. (Nakai 2016, 6) 

Nakai elaborates in a later paper: 

The usual training of musicologists does not involve learning the necessary technical tools to read, let alone analyze and interpret, these extant materials related to electronics. Simply put, scholars often find it difficult to trace circuits, read schematics, or understand the workings of resistors, capacitors, diodes, or transistors, which dominate the bulk of Tudor’s archived materials. The last [issue is] 
of technical nature, which means that there is a way to solve the problem—what 
is required, simply put, is more work from the scholar. (Nakai 2017a, 3) 

The issue with illegible or difficult mediums of operation such as electronic circuits and assembly diagrams extends naturally to computer music, where code, even if it is often textual, is not discursive and requires another type of reading expertise. Developing the nascent field of critical code studies, Mark Marino points out that “like other systems of signification, 



code does not signify in any transparent or reducible way. And because code has so many 
interoperating systems, human and machine-based, meaning proliferates in code.” (Marino 
2020, 4) It is this proliferation of meaning in computer music code that makes the medium an under-explored world for critical work. Georgina Born’s large-scale study Rationalizing Culture, focused on the Institut de Recherche et de Coordination Acoustique/Musique 
(IRCAM) in Paris, France, which effectively acts as the country’s primary computer music 
research center, identified this potential twenty-five years prior to Marino. She prefigures 
Nakai’s statement by acknowledging that: 

The main limitation to my fieldwork was my lack of computer programming 
skills, which meant that although I was able to use very basic programs and to 
observe and question programmers with increasing insight, I was unable to enter 
fully the culture of musical software research and development that is a major 
and fascinating area of IRCAM’s work. (Born 1995, 8) 

Both Born and Nakai see the potential in discussing electronic music through the lens 
of the technical work that made it possible. Headley Jones, independent inventor of the 
electric guitar and of guitar amplifiers in Jamaica in 1947, offers an encouragement in this 
direction when writing about the study of his devices: “Persons who wish to call themselves 
musicologists ought to avail themselves of any opportunity to make a scientific study of 
the art of music.” (H. Jones 2010, 107) Most studies of electronic and experimental music 
(Holmes 2004; Gottschalk 2016; Warner 2017) acknowledge electronics in the development of their object of analysis, but rarely engage with a systematic technical study of said objects. There are deeply technical readings of sound and music, from Helmholtz’ landmark On the Sensations of Tone (1863, translated into English in 1875), portions of F. Langford-Smith’s 1,498 page fourth edition of the Radiotron Designer’s Handbook (1953), Kahrs & Brandenburg’s edited volume Applications of Digital Signal Processing to Audio and Acoustics (1998), to Julius O. Smith’s towering work in digital signal processing (starting in the 
early 1980’s and continuing today), but modern discussions of technology in music seem to 
be concerned primarily with how things work (or can work, or should work) rather than who 
they work for, what they mean, and how they have been used for music. 

This study assembles the theoretical concepts and references necessary to tackle the 
lacuna identified by Nakai and propose a systematic approach grounded in both theoretical 



and practical considerations of music to address this problem. In some sense, it is a response 
to German media theorist Bernhard Siegert’s provocation regarding music, notation and 
technology in modern electronic music practices: 

Musical notational systems operate against a background of what eludes representation and symbolization—the sounds and noise of the real. Any state-of-the-art account of cultural techniques—more precisely, any account mindful of the technological state of the art—must be based on a historically informed understanding of electric and electronic media as part of the technical and mathematical operationalization of the real. (Siegert 2013, 15) 

1.2 Technologically dependent music after David Tudor’s Bandoneon ! (1966) 

In discussing Beethoven’s string quartets, Joseph Kerman illustrates that the question 
of form and experimentation have long been topics of discussion in music: 

An artist does not start out with a beautiful idea or emotion clearly in mind, 
which he then labors to project through cut-and-dried technical means. The idea 
becomes known only through the very process of its exploration; the most we can 
postulate is an inchoate dialectic between ends and means, between technique 
and expression - a dialectic. (Kerman 1982, 35) 

Taking this dialectic between technique and expression, let us focus on the mechanisms 
of its operation in the making of electronic music. How can the development of music and 
electronic objects be overlapping practices, and is that visible in the non-discursive objects 
identified by Nakai? The practice of art is irrevocably associated with some combination of media (Becker 1982, 3). Fischer elaborates: 

Form is also, to some extent, conditioned by materials. This does not mean, 
as some mystics would have us believe, that a certain form is ‘latent’ in a particular material, nor that all materials strive towards their own perfection or 
‘de-materialization,’ nor that man’s desire to form materials is a metaphysical 
‘will towards form.’ But every material has its specific properties which allow it 
to be formed in specific though possibly varied ways. (Fischer 1963, 153) 



Fittingly, musical applications of electronics emerge roughly at the same time as electronics themselves (Teboul 2017b, chapter 1). Caleb Kelly comments on the question in 
relation to the work of composer Nicolas Collins: 

Collins believes there are two distinct approaches to the use of technology when 
composing a work. The first is to take a pre-existing piece of technology and 
look inside it. He compares this to Michelangelo looking for a sculpture inside a 
particular piece of stone rather than having an image and forcing that onto the 
stone. The second method is to design a circuit from the bottom up. (...) The 
circuit is thus the actual composition and not merely an instrument; the circuit 
diagram becomes a self-performing graphic score. (Kelly 2009, 246) 

The problem identified by Nakai relates to how music studies grapple with some of their 
more recent objects of study, shying away from electronics to focus on more traditional staff 
notation, text archives, and interviews. In this context, Kelly is claiming that a musical work 
can be so much about technology that the electronic instrument used in it defines it. This 
claim deserves to be historicized and detailed. One of his references is Nicolas Collins, a composer born in 1954 who was the student of composers Alvin Lucier (born 1931) and David 
Tudor (1926-1996). We can see an interest in the musical potential of circuitry through the 
lens of Lucier’s account of the group of composers he was working with starting in the 1960’s. 
Discussing Gordon Mumma’s piece Hornpipe (1967), he describes Mumma’s “cybersonic” system, which would interact to self-correct the electronic sounds produced semi-independently 
from Mumma’s French horn. He elaborates on what he sees as a complementary work by 
David Behrman, Runthrough (1967-1968), where a light-controlled system allows performers 
with flashlights to control various electronic oscillators and modulators. Lucier states: “In 
Hornpipe and Runthrough, there were no scores to follow; the scores were inherent in the circuitry.” (Lucier 1998, 6) 

Collins was a member of David Tudor’s Composers Inside Electronics ensemble (CIE), 
which was created to install and perform Tudor’s landmark piece, Rainforest IV (1973), 
and where Behrman and Mumma were regular guests. Tudor, since Bandoneon ! (1966) 
had abandoned the piano for electronics augmented with various acoustic resonators, and, 
occasionally, the bandoneon (Holzaepfel 1994). Each of his works after Bandoneon ! would 
include the development of a custom electronic system, a bricolage of various homemade or 



commercial electronic equipment, often assembled with the help of various technically minded 
collaborators (Gordon Munnna, but also Lowell Cross, Billy Kluver or Forrest Warthman) 
and occasionally performed with CIE (Rogalsky 2006). 1 Tudor’s composition were idiosyn¬ 
cratic, leading to him having to be involved either as performer or facilitator for a significant 
portion of the interpretations of his works (CIE was motivated, in part, with some sort of 
apprenticeship model for this type of artwork). His performance notes, usually in the form of 
wiring diagrams representing various configurations to explore, are both reminders of what 
Tudor was intending to explore with any specific piece and a description of the circuits to 
be built and operated. There is no doubt as to Tudor’s capacity to use traditional notation 
- his deliberate move away from staff notation was both a practical and an ideological decision. 
Indeed, it was a step toward embracing what Tudor wrote of as the “view from inside”: 

Electronic components & circuitry, observed as individual & unique rather than 
as servomechanisms, more & more reveal their personalities, directly related to 
the particular musician involved with them. The deeper this process of observation, 
the more the components seem to require & suggest their own musical ideas, 
arriving at that point of discovery, always incredible, where music is revealed from 
‘inside,’ rather than from ‘outside.’ (Tudor 1976) 

Building these systems of electronics became Tudor’s mode of musical experimentation. 
By making space for “electronic components and circuitry,” Tudor could let the nature of 
his technological environment speak back at him and his audience through his assemblage of 
assemblages. Composer and fellow CIE member Ron Kuivila sees this as an exploration by 
Tudor and his students / co-performers of the artistic affordances of their techno-capitalist 
moment. 2 David Tudor’s legitimacy as a virtuoso pianist transferred over into his tinkering 
practice, making composing inside electronics a viable academic, cultural and musical 
endeavor in avant-garde music circles in the US and beyond. Equally important, however, 
is remembering that this technical basis of music practices was not unique to Tudor: for 
example, consider cultural theorist Julian Henriques’ documentation of dancehall sound 
systems in Jamaica, a practice that has roots in electronics tinkering preceding Tudor’s time. 
Henriques writes: 

1. For more on the connection between musical bricolage and technological bricolages, see Henriques (2011, 
233) or Teboul (2020b). 

2. personal conversation with the author, Wesleyan University, 10/05/2015. 




In the early days, this business of fine-tuning the set was even more critical than 
it is today, [Denton] Henry told me. This is because, as he said, there were 
“no knobs, [you] couldn’t adjust it.” This meant that any tuning adjustment, 
“before equaliser become so popular,” was literally hard-wired. It had to be 
done with a soldering iron, replacing and re-soldering certain electromagnetic 
components, such as resistors, “[b]ecause at that time when you tune it was 
fixed. You couldn’t go out there and use the equaliser and vary it.” With the 
introduction of variable controls to be used for compensation, “about ’75 or maybe 
even early” it became technically possible to vary the output of the set, to take 
into account the particular conditions of the session. (...) In this respect, the 
ownership of a Sound is indeed the ownership of a particular sound. (Henriques 
2011, 71-72) 

Placing Tudor within a multicultural perspective on music-making via electronics tinkering is 
outside the scope of this dissertation, but it is an explicit frame for it. 

1.3 What does critically incorporating reverse engineering into music studies mean? 

I have suggested that we need to engage with the electronics of music to better 
understand the ways in which some instances of electronic music are composed - especially in cases 
like Tudor’s or CIE’s where the music is directly affected by the affordances of the technology 
at hand. In other words, if there is what Nicolas Collins calls “music implicit in 
technology” (Collins 2007, 45), then perhaps music studies, when it concerns itself with electronic 
sound, can gain something from understanding the process by which its explorers go from 
technology to music and back. Following Nakai’s assessment, how can the mechanisms of 
the co-production of handmade electronic music devices and artworks be better understood? 

My suggestion here is simple: the tools that engineers use to assess these circuits and 
code can be critically and productively incorporated into music scholarship. This dissertation 
follows a number of preliminary publications which demonstrated the potential of this 
hypothesis (Teboul 2015, 2017b, 2018), building onto those to offer a systematic review of 
where, when and how these tools could and should be used. 

This is not an anti-humanistic proposal for a mechanical study of electronic music 




unaware of the labor and users underlying its development. Rather, it is a mixed methods 
project outlining how reverse engineering’s relatively systematic approach can complement 
the traditional tools of music studies to address what 1 called technologically-dependent 
music above. If anything, reverse engineering musical artefacts - electrical or not - reveals 
the deeply intertwined nature of human labor and the physical properties of the materials 
that surround us. 

More specifically, this dissertation leverages principally qualitative modes of analysis: 
interviews, archival and primary source research, and reverse engineering. With these it 
exposes the mechanisms of human-machine interaction in what Nicolas Collins calls “handmade 
electronic music.” (Collins 2006) Exposing these mechanisms implies an understanding of what makes a 
handmade electronic music work function. As I detail in my third chapter, reverse engineering “is the 
process of analyzing a subject system to identify the system’s components and their 
interrelationships and create representations of the system in another form or at a higher level 
of abstraction.” (Chikofsky and Cross 1990, 15). Based on this definition, this dissertation 
presents the imprecise and poorly documented history of the practice across mechanical, 
electrical and computer engineering. In doing so it shows how reverse engineering has always been a flexible 
method if one has a wide-ranging appreciation of the potential object of study: “The primary 
purpose [...] is to increase the overall comprehensibility of the system for both maintenance 
and new development.” (16) If this can be applied to electronic music, it effectively shares 
objectives with music scholarship. 
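As a toy illustration of the Chikofsky and Cross definition (and not a claim about this dissertation’s own tooling), the sketch below takes a low-level representation of a system and derives one “at a higher level of abstraction.” The netlist format, component names, and values are invented for the example; they stand in for the kind of circuit documentation a reverse engineer might start from.

```python
# Hypothetical sketch: parse an invented, SPICE-like netlist (the "subject
# system") into a list of components and a node-to-component map (a
# representation "in another form, at a higher level of abstraction").
from collections import defaultdict

netlist = """\
R1 in n1 10k
C1 n1 gnd 10n
R2 n1 n2 10k
C2 n2 gnd 10n
"""

components = []           # (name, kind, nodes, value)
nodes = defaultdict(set)  # node label -> names of components attached to it

for line in netlist.strip().splitlines():
    name, a, b, value = line.split()
    kind = {"R": "resistor", "C": "capacitor"}[name[0]]
    components.append((name, kind, (a, b), value))
    nodes[a].add(name)
    nodes[b].add(name)

# Higher-level view: which components are interconnected at which nodes.
print(f"{len(components)} components, {len(nodes)} nodes")
for node, attached in sorted(nodes.items()):
    print(f"  node {node}: {', '.join(sorted(attached))}")
```

Even at this toy scale, the output makes interrelationships visible (for example, that R1, C1, and R2 all meet at node n1), which is the kind of comprehensibility gain the definition describes.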

1.4 A method, case studies, and pieces 

The overall project has three outcomes. The first is the development of a general 
method for the inclusion of technical materials, via reverse engineering, in techno-culturally 
aware music scholarship. The next chapter will outline useful precedents but also detail how 
there is, at the time of writing, no cross-practice framework which helps us appreciate variation 
in individual pieces and studies when compared with other “handmade electronic music.” 
Individual case studies at the level of detail we are concerned with here, when they have 
been undertaken, tend to focus on a single work or composer rather than the practice 
as a whole. 

The second outcome is a new framework which is refined after a basic presentation of 
the question’s implications (chapters 2, 3, and 4). Then, using a series of case studies which 




both illustrate the potential of my method and the points of productive friction it contains 
(chapter 5), I present a set of comparative and complementary case studies, going across 
hardware and software, generative and through-composed pieces to prove the versatility of 
the method proposed. It features previously unpublished information on both canonical 
composers such as Steve Reich, and under-discussed practitioners such as Paul DeMarinis, 
Tammy Duplantis and Ralph Jones. 

The third outcome is an illustration of this method as a productive perspective from 
which to consider the production of new works (also included in chapter 5). If reverse 
engineering techno-musical work enables a deeper understanding of the labor involved at the 
time of the work’s invention, then it can also assist in the making of electronic media works 
of increased legibility. In other words: if one has a certain perspective on how past musical 
works involving electronics might be read, one can shape the development of new works with 
future readings in mind. To this end, I detail the new works that emerge out of some of 
my replication endeavors based on my reverse engineering projects. I outline the extent to 
which re-invention and analyses are complementary processes. The vast majority of past 
systems are incompletely documented, and as such replication requires informed guesses on 
how a specific work might be reimplemented “in the spirit of the original.” Re-invention 
offers space for experimentation, which is entirely in line with much of the philosophies of 
the original authors discussed in this project. 

These outcomes are of value to a number of audiences. Indeed, this deep reading 
of past media, with practical and theoretical applications thereof, connects not only to music 
scholars, the primary audience and context within which this work is undertaken, but also to 
cultural scholars of various kinds. First, one of the primary goals of reverse engineering is 
“increased overall comprehensibility.” (Chikofsky and Cross 1990, 15) As such, outputs such 
as the case studies will be of interest to educators and historians alike, both as studies of the 
works at hand and as templates on which to elaborate new case studies. 

In a similar way, another goal of reverse engineering, as quoted above, is new developments. 
This perhaps departs slightly from the common use of the word reverse engineering, 
which may traditionally not be considered a creative practice, even when the expert definition 
of Chikofsky and Cross makes clear that creativity is often an essential aspect of 
most reverse engineering projects. This is relevant to artists and creative engineers interested 
in how incomplete documentation can generate new systems and the corresponding 




new artworks. 

More generally, this dissertation challenges the belief that artistic work and engineering 
work are inherently different cognitive processes engaging with separate aspects of creativity. It 
does so by offering a variety of past examples where people artistically interpreted engineering 
paradigms for their own purposes, and a number of new examples in which the process of 
composition is documented in detail as being equivalent to the process of inventing and 
re-inventing. 



CHAPTER 2 
State of the Art 


2.1 Initial texts: Pinch and Trocco, Collins 

This project was initially motivated by two texts: composer Nicolas Collins’ Handmade 
Electronic Music: The Art of Hardware Hacking (2006) and Analog Days by Trevor Pinch 
and Frank Trocco (2004). I will begin by presenting these two texts as the starting points 
from which to discuss more operational references. Briefly, the former offered a practice to 
focus on, while the latter offered a critical vision of technology to ground that focus in. 

Handmade Electronic Music is a foundational work for those interested in making electronic 
instruments and music. It is presented as a collection of tips and tricks for the 
successful but only somewhat directed assemblage of safe and affordable electronic components 
for the sake of experimentation, and occasionally art. However, after repeated study, 
I believe it has understated critical potential, due in no small part to its author’s extensive 
interactional expertise (Collins and Evans 2002, 244) with regard to the cultural context 
in which making electronic instruments yourself takes place. It contains no definition of the 
term “handmade electronic music,” but the practice at hand is not ambiguous. 

Above, I had defined handmade electronic music as the coordinated making of electronic 
instruments and electronic sound for an artwork. An exploration of the “music implicit 
in technology” (Collins 2007, 45), handmade electronic music requires a curiosity about the 
electrical properties of materials, the musical potential of these things as their interconnections 
are made and broken, and our role in fostering contexts that encourage the exploration of 
those connections. 

Since Handmade Electronic Music's second edition in 2009, this approach to music-making 
has continued to build academic and artistic legitimacy as a viable mode of musical 
composition, without reneging on its tendency toward fragmentation and its shape-shifting nature 
as a type of material production. 

Handmade electronic music, as a banner, offers a model for engaging with electronics 
knowledge in music. It is not a strict approach to measurement and analysis: Collins’ book 
strikes a balance between the use of circuit schematics and the lack of circuit analysis or, 
often, component values. This points to the unique design context that handmade electronic 
music finds itself in: one where practitioners occasionally operate in terms of what 
sounds interesting to them rather than what, from a circuit perspective, “functions” or “is 
right.” This unique design space relates to what I have previously called, adapting a term 
from Anthony Dunne, “post-optimality.” (Teboul 2018) Defining the post-optimal as the 
productive potential of user-unfriendliness names a long-appreciated capacity for musical 
instruments to be valued in cultural production in the face of their impracticality, cost, and 
the low potential for monetary return (Evens 2005; Rovan 2009, 154). This flexible understanding 
of success and function (where measurable performance is replaced by an aesthetic, 
and never definitive qualitative judgment) is inspired in part by the exploratory nature of 
Hannah Perner-Wilson’s Kit of No Parts. The “Kit of Parts” is a model common in large-scale 
manufacturing, describing all the components necessary to assemble a product with 
specific measurable properties. Perner-Wilson’s Kit of No Parts is a reminder that some 
qualities are not quantifiable, and therefore that manufacturing can be re-imagined from a 
less metric-heavy perspective (Perner-Wilson 2011). Perner-Wilson’s project resonates with 
Collins’ book in the sense that both follow occasionally less systematic approaches for the 
design of systems than the establishment of a set of specifications and their methodical 
achievement. As Collins details, there are visible parallels between composer John Cage’s 
chance operations and handmade electronic music’s erratic approach to circuit building. Circuits 
and computer code can be assembled experimentally, by chance, a process which can 
and has been rendered performative (see, for example, the work of composers and performers 
Vic Rawlings or Bonnie Jones, discussed in Collins (2009, 112) or Teboul (2020a)). 

However, Collins’ book engages in large part with post-optimality because it is also 
concerned with the hacking of existing devices, a practice not explicitly part of Perner-Wilson’s 
project (although it could be retroactively affixed to it). Here Handmade Electronic 
Music acts as a complement to Reed Ghazala’s book Circuit Bending: Build Your Own 
Alien Instruments (Ghazala 2004, 2005), which intentionally features no circuit schematics, 
although it does offer numerous equally informative layout and connection diagrams. Where 
Collins breaks the engineering habit of needing to understand how everything works, Ghazala 
develops an idiosyncratic understanding of circuits into a political and aesthetic message 
which emphasizes accessibility beyond what he calls “theory-true” approaches (Ghazala 2005, 
12). Trevor Pinch, in a later paper, summarizes a series of interviews with circuit-benders 
by writing: “These sonic explorers actually place value on not knowing how circuits work 




in terms of standard electronics and the schematics used to describe them.” (Pinch 2016, 9) 
Nevertheless, both hardware hackers and circuit benders see as a starting point the unrealized 
musical potential of pre-existing electronics. Handmade electronic music is different from 
circuit bending because it does not assign the same baggage to understanding what is being 
hacked (Collins 2006, 91). 

The premise for both Collins’ book and Ghazala’s book is the availability of inexpensive, 
user-modifiable raw materials, from individual components to full systems to be cannibalized. 
With these, and the techniques outlined in either book, hacking becomes equivalent to, 
or close to composition. 3 On the surface, this is the book’s purpose: recipes, with the 
understanding that success may not necessarily be what you are after because good sounds 
and compositions have no linear relationship with good engineering. Stated differently, 
Collins is advocating for a perspective in which vague recipes are best when a process 
is optimized for experimentation rather than function. Arguably, this is the unavowed 
objective of a significant portion of handmade electronic music projects. Yet for all of Collins’ 
light tone, the book is still permeated with a deep historical understanding, one which results 
from decades of involvement in the practice of the music he is describing. Collins’ trajectory 
places him in a privileged position to do so: from being a student of Lucier and a member 
of Tudor’s CIE ensemble, to developing his now decades-long solo career and position as an 
educator and scholar, he has been at the epicenter of a number of relevant hot-spots for 
this kind of attitude towards sound and electronics. All the examples emerge from years of 
his own experience, and each chapter is side-barred with various references to past artists 
and pieces which either used the circuit being discussed or inspired its development. The 
hierarchy of content built between the how-tos and the peripheral context is clear. It affords 
the book a coherent accessibility which leaves ample room for a socio-technical analysis of 
the mechanisms that led to the existence of handmade electronic music in the first place. 

Pinch and Trocco’s Analog Days: The Invention and Impact of the Moog Synthesizer 
offers a solid foundation for such a socio-technical analysis. Trevor Pinch is a science and 
technology scholar, acknowledged for his hand in the development of the “social construction 
of technology” or SCOT approach to historical understandings of technology in society 
(Bijker et al. 2012). One of SCOT’s primary contributions in this context is the concept of 

3. for more on this topic, see Teboul 2020b. For more on the tinkering and ad-hoc roots of standardized 
digital musics and MIDI, see Diduck 2018. 




co-construction: the joint development of technology and the social field that surrounds it 
(Bijker and Law 1992; Pinch and Bijker 1984; Bijker 1995), or in Pinch and Oudshoorn’s 
words, the “mutual shaping of social groups and technologies.” (Oudshoorn and Pinch 2003, 
3) Co-construction is essential to the present project’s understanding of handmade electronic 
music - much of the following chapters will be dedicated to operationalizing this concept in 
the context of do-it-yourself (hereafter, diy) audio devices. Building on this perspective, 
Pinch has been one of the primary scholars connecting the field of sound studies to science and 
technology studies (STS) (Pinch 2019; Marshall 2014). Analog Days stands as the reference 
in the history of Robert Moog and his various synthesizer designs, offering a sociological 
paradigm within which to read the complex history of the inventor and his products. The 
book’s introduction explicitly cites the work of French sociologist of technology Madeleine 
Akrich (born 1959) and her concept of scripts, de-scription and re-inscription (Akrich 1987; 
Pinch and Trocco 2004, 311, footnotes 20 and 21) to better understand the role of the user 
(Oudshoorn and Pinch 2003; Pinch and Collins 2014) in the co-construction of technical 
objects generally and electronic music instruments specifically. 

Pinch, like Collins, possesses first-hand experience both building and using electronic 
music instruments. Complemented by extensive research and interview work, Analog Days 
expands Handmade Electronic Music in that it provides a sociological framework for the 
discussion of handmade electronic music in the context of modern market capitalism, which 
pervasively shapes the relationship between designer, maker and user of any technology, 
musical or not (Theberge 1997; Taylor 2016). Summarizing, designers’ labor crystallizes an 
imagination of the ways in which users are expected to use the device being planned. This 
is the script contained in every technical object. Users, however, have agency, and are able 
to assess their relation to this script after using the device. If unsatisfied, they may attempt 
to soften the influence of the original script, what Akrich calls de-scription (Akrich 1992). If 
possible, they may also attempt to replace the manufactured script with a modified version 
which better suits their needs (re-inscription). Quoting Basile Zimmermann: 

Technical objects constrain what users do with them. They are not neutral 
entities; they embody information, choices, values, assumptions, or even mistakes 
that designers have voluntarily or involuntarily embedded in the technology. As 
a result, we often observe discrepancies between users’ needs or expectations and 
what the creators originally had in mind. (Zimmermann 2015, 5) 




I am concerned here with how people deal with this discrepancy within electronic 
music. In this light, handmade electronic music could perhaps be alternatively defined 
as the response to such disconnects between needs of humans and abilities of technology. 
To address this concern, we must begin to appreciate the mechanisms which underlie 
co-construction: this cannot be done without considering both Pinch’s socio-technical method 
and Collins’ proximity to the sonic consequences of technical objects. Pinch writes: “Thus 
far I have told this story without much reference to sound. In our book [Analog Days] we 
wrote ‘sound is the biggest silence’”. (Pinch 2019) The coordination of Pinch’s and Collins’ 
perspectives to address the actual sound of handmade electronic music outlines the external 
boundaries of my discussion, between sociology of technology and technologically inflected 
art practice, between the scripted technical objects of global industrial production and the 
unique devices of handmade electronic music. Although sociological-scale discussions of the 
electronics supply chain and what it makes available to specific users in specific time-spaces 
are out of the scope of this dissertation, this is clearly the underlying context which makes 
each case-study possible, and in turn every analysis meaningful. At the center of these two 
significant works and the supporting literature I have mentioned up to now is an opening, 
one which asks: can the insights offered by these two works guide a generalized theory of 
making electronics as a mode of composition? 

2.2 Operational references 

If the work of Nicolas Collins offered a topic to focus on, and the writings of Trevor 
Pinch and Frank Trocco gave it a critical framework on which to ground that focus, a few 
additional reference points are in some ways more central to answering this question in the 
affirmative. Remembering Nakai’s provocation for a theory of music which could grapple 
with circuits, it seems important to note that although both books discussed above offer 
context for such a need, they do not fill it because they do not detail the mechanisms by 
which handmade electronic music is co-constructed. 

There are publications in music studies which do echo that need for a technological 
understanding of electrically mediated musics. Following a brief review of these publications, 
this subsection introduces media archaeology and reverse engineering scholarship in more 
detail as influential precedents for this conjunction. 




2.2.1 Music studies: ethnomusicology, musicology, organology 

Musicologist Thomas Patteson’s Instruments of New Music details the historical 
co-construction of early electronic instruments and related musical works in Germany prior to 
World War Two (Patteson 2016). Through an archival inquiry, rather than ethnographic 
study, Patteson shows how music technologies and new musics are developed as a conversation, 
one influencing the other and resulting in various artefacts along the way which act as 
records complementary to written, narrative accounts. When read alongside historian of 
technology Roland Wittje’s The Age of Electroacoustics, it shows us the extent to which it is 
possible to get a detailed image of this conversation, even a century later. Wittje writes: “the 
transformation of acoustics into electroacoustics went far beyond electric technology and led 
to a conceptual redefinition of sound.” (Wittje 2016, 19) This redefinition would effectively 
motivate calls for new and variously machinic conceptions of music developed in response, 
from Ferruccio Busoni to John Cage (Patteson 2016, 13, 155). 

This call and the underlying process of techno-musical co-construction have modern 
consequences. As early as 1990, ethnomusicologists recognized that “music electronics could 
well be the common denominator for comparative study.” (Bakan et al. 1990, 38) 
Ethnomusicologist David Novak, discussing the development of modern Japanese noise music and 
echoing Pinch and Trocco, offers the following observation: 

A classic Noise setup is created from an interconnected assemblage of consumer 
electronics, often a group of small guitar effect pedals connected through a mixing 
board. Although individual setups vary greatly, Noisicians generally work 
with these inexpensive guitar “stompboxes,” also called “effects” (described by 
Japanese performers with the English transliteration efekuto), which are used 
both in live performance and in recording. (Novak 2013, 141) 

Reflecting on his study of these noise setups in the context of modern Japan, he also 
writes: “If not ethnography ‘on the ground,’ then, this is ethnography ‘in the circuit,’ following 
Noise through the overlapping, repetitive channels of its social and sonic feedback.” (13) 
However, Novak does not explicitly engage with the electronic circuits central to Japanoise. 
This is just one example of a musical subculture being in large part defined by its relationship 
to electronics, with little scholarship documenting the technical work done by its 
practitioners. This is not an isolated phenomenon in the sense that many other electronic 



music subcultures are influenced by the affordances of the devices locally available; that 
phenomenon can be observed anywhere commodity electronics have been 
made available. Ray Hitchins, discussing Jamaican popular music (JPM), writes: 

Research has therefore led to the sounds of JPM being traced through the de¬ 
velopment and application of technology, which is adopted, adapted and recon¬ 
figured to meet local objectives, representing the cultural and creative pathways 
along which the logic of technology can evolve and transform sound. (Hitchins 
2016, 2) 

JPM, like Japanoise, seems to require a techno-social contextualization in order to be 
most accurately understood. Yet, the technological language used in the study does not allow 
Hitchins to describe exactly how these devices are adopted, adapted, and reconfigured. An 
additional example of musical practice to a different extent dependent on and defined by its 
relationship to technology is discussed in Caleb Kelly’s Cracked Media: the Sounds of Malfunction 
(2009). The curator offers a historicization of technical glitches, failure, destruction 
and decay in electronic music and sound. He states: 

“Cracked media” are the tools of media playback expanded beyond their original 
function as a simple playback device for pre-recorded sound or image. ‘The crack’ 
is a point of rupture or a place of chance occurrence, where unique events take 
place that are ripe for exploitation toward new creative possibilities. As we will 
come to see, the crack takes a variety of forms, much like the practices introduced 
above, from gentle coaxing of faint crackle on the surface of a vinyl record to the 
total destruction of the playback tools. The practice utilizes cracks inherent 
in the media themselves— we cannot play a vinyl record without causing some 
damage to the surface of the disc—and leads to a creative practice that drives 
playback tools into territory where undesired elements of the media become the 
focus of the practice. (Kelly 2009, 4) 

This modern interest in technology across musicology and ethnomusicology reflects a 
corresponding interest in organology, the field dedicated to the study of instruments as a 
whole. Organology, as a solidified practice, emerges in the study of western music with the 
efforts of Victor-Charles Mahillon (1841-1924), a Belgian musicologist and founder of the 




first instrument museum in Brussels, and of the Austrian ethnomusicologist Erich Moritz 
von Hornbostel (1877-1935) and his associate Curt Sachs (1881-1959), a German-American 
musicologist. The Hornbostel-Sachs system, a refinement of the Mahillon instrument taxonomy 
(Mahillon 1893), was published in German in 1914 before being translated into English 
in 1961 (Von Hornbostel and Sachs 1961). It was aware of its own sedimenting potential in 
the face of a highly fungible and variable practice, opening with: “The objects to be classified 
are alive and dynamic, indifferent to sharp demarcation and set form, while systems 
are static and depend upon sharply-drawn demarcations and categories.” (4) It is still the 
primary mechanism by which museums conceive of and classify their musical collections. 
Although it is reminiscent of Darwinian evolutionary models, in accounting for the history 
of each instrument, it also attempts to link instances via morphological and historical 
proximity. Of course this is meaningful in the sense that morphological similarities often reflect 
historical connections, in the same way that the concept of chaîne opératoire facilitates the 
understanding of tool manufacture based on an archaeology of tool manufacturing processes, 
by-products and results (Farbstein 2011, 408). However, as noted by Claudio Gnoli, a scholar 
of classification systems, this double allegiance to both mechanics and history has inherent 
frictions which are inconsistently resolved: 

Lyres are defined as yoke lutes where “the strings are attached to a yoke which 
lies in the same plane as the sound-table and consist of two arms and a crossbar”. 
The crowth, a medieval instrument documented in iconographical sources, 
in its initial form fell under the definition of lyres; but later it got a neck, so that 
it is no more a lyre, though being the development of a lyre. A more familiar 
example is piano, which is classified among table zithers, as in first pianos strings 
were just tightened on the sound-table; however, later pianos contain a cast iron 
frame, on which strings are now tightened, so that strictly it should be considered 
as a frame zither instead. In the latter case, the genetic criterion prevails in the 
classification, while in the former what prevails is morphology. (Gnoli 2006, 144) 

The friction that Gnoli identifies was exacerbated by the increasingly prevalent
presence of electronics in musical instruments. Although electronic instruments existed in 1914
(thinking of such incongruities as the Telharmonium, the Singing Arc or the Choralcelo)
they are not included in the original Hornbostel-Sachs classification system. The
intellectual and practical issues caused by this oversight prompted the revision of the system in
1940, with the addition of an electrophone class. However, the development of microphones 
which could amplify vibrations in solids, air, and ferromagnetic mediums meant that any 
vibration could be transduced into electrical energy, amplified, and reinjected in another 
instrument, effectively linking any of these genealogies to any other. Gnoli already pointed 
out how the pre-revision system exhibited flaws, but electrification of musical instruments 
effectively relinquished both organology and its taxonomic priorities to museums and
ethnographic studies of acoustic instruments. Margaret Kartomi, an ethnomusicologist, in her
cross-cultural analysis of instrument classifications, opens her edited volume with a chapter 
titled “Any classification is better than chaos” (Kartomi 2001, 3) before dealing mostly with 
a comprehensive cross-cultural review of taxonomies that nevertheless does not address the 
challenge presented by the possibilities of electric music. Organologists who deal with this 
issue, such as Emily Dolan (Dolan 2012, 2013), Deirdre Loughridge (Loughridge 2013), or 
Thor Magnusson (Magnusson 2017) acknowledge the development offered by electronics in 
music but do not engage with the potential of circuit theory or computer code analysis, even 
when it is within their expertise. 

In defense of music studies, the incorporation of technical analyses of electronics has 
correspondingly eluded most fields in the humanities. Even Roland Wittje, who recently 
states from a history of technology standpoint that “to understand these transitions [from 
acoustic technology to electro-acoustics], we must consider both the development of electric 
technologies and the changing understanding of electrodynamics - especially electric
oscillations and electric circuit design,” (Wittje 2016, 19) does not offer a method for the analysis
of circuits, remaining primarily in the domain of humanistic study of how and when these 
objects were used rather than how exactly they were built. This is despite the fact that he is clearly
able to engage with circuit schematics (Wittje 2013, 43, 53), which raises the question of how
much of this is due to disciplinary conservatism on the part of some publishing institutions 
rather than a lack of interest. 

Forays into a truly hybrid, techno-social reading of electronic music have, however,
occurred in various embryonic forms which can act as starting points. Kenny McAlpine’s
discussion of chiptune and low-fi video game sound delves extensively into the material and 
sonic implications of early computing devices (McAlpine 2018). Basile Zimmermann’s
discussion of electronic music practices in China, especially as they relate to the collaborative




development of Max/MSP software he was involved in, is reminiscent of some of the
discussions in my upcoming chapters (Zimmermann 2015, chapter 9). George Lewis’ extended
review of his “Voyager” system and various related questions in improvisations and agency 
has brief discussions of the computer code he’s developed to explore his relationship to
autonomy and player hierarchy in music (Lewis 1999, 2000, 2007, 2009, 2017, 2018). Although
Brian Miller’s discussion of Voyager highlights the plethora of topics relevant to the piece 
(quoting Lewis, from “ethics” to “the histories of peoples and nations”), engaging with the
technical mechanisms connecting the piece to those topics is beyond the scope of his
discussion, which focuses on music theory (Miller 2020, 276-283). Megan Lavengood’s dissertation,
recently adapted in article form, details the legacy of the Yamaha DX7 synthesizer and
introduces a systematic use of spectrograms to alleviate musicology’s difficulties in dealing with
timbre (Lavengood 2019). Nakai’s dissertation dissects Tudor’s works at the system level, 
but doesn’t systematically address the complexity of various subsystems’ interactions and 
their impact on the recorded results of Tudor’s work (Nakai 2016). Ted Gordon’s
dissertation provides detail into how Donald Buchla (1937-2016), Pauline Oliveros (1932-2016), and
Morton Subotnick (born 1933) interacted with technology to claim their place in electronic 
music history, but it does not examine specific circuits in relation to specific compositions
(Gordon 2018). Seth Cluett’s writing examines the role of the loudspeaker as a musical 
object to great depths, but also prioritizes cultural and social perspectives over technical 
discussions (Cluett 2013). The work of Jan-Peter E.R. Sonntag and Sebastian Döring on
Friedrich Kittler’s homemade synthesizer is technically thorough and philosophically rich,
but remains implemented within the unique confines of this intellectual heritage (Döring
and Sonntag 2019). Ethnomusicologist Andrew Dewar’s discussion of the Sonic Arts Union
is influential here for its discussion of the “handmade” sound, but it remains a cultural inquiry
rather than a techno-cultural one. The “Repertory” project, implemented by Miller Puckette, 
Kerry Hagan and Arshia Cont, although it offers fascinating technical re-implementations of 
classic electroacoustic works, does not investigate the socio-historical context of each artwork 
or expand documentation of the original project (Puckette 2001); it is more concerned with 
re-invention based on scores than with reverse-engineering and re-implementation. 4 Suzanne 

4. It seems important to emphasize that I am interested in technologies that somehow closely embody 
some sort of artwork, rather than instrument design work widely construed. In the case of the latter, some 
detailed analysis and modelling projects have been undertaken, some with public documentation. For a short 
list of examples, see the end of Teboul (2019). 




Thorpe and Alex Chechile’s work on an analog reworking of I of IV by Pauline Oliveros 
stands out as an example of previous work in which a performance practice and an experi¬ 
mental approach to reverse engineering a musical system and its social context led to both 
a better understanding of the original work and a new performance (which took place at 
ISSUE Project Room in New York in 2012). The insights regarding this project, originally 
presented at McGill University in 2017 (Chechile and Thorpe 2017), are forthcoming. 5 

There are a number of dissertation projects which frame my present one, but do not
overlap with it in explicitly offering a generalizable approach to studying custom
electronics in music. David Kant’s dissertation delves into his automatic transcription system
in great technical depth (Kant 2019). Frances Morgan’s current research promises to offer 
an extensive and much needed history of Peter Zinovieff’s EMS synthesizer brand (Morgan
2017). Madison Heying’s discussion of Carla Scaletti and the Kyma system is invaluable as an 
example of a community-wide study of a practical, tool-based approach to music and sound 
(Heying 2019). Jack Armitage’s valuable work surveying the potential of digital luthiery in 
instrumental education offers much needed groundwork for the social aspects of organology 
in our information communities (Armitage, Morreale, and McPherson 2017). Eamonn Bell’s 
discussion of a “computational attitude” within music theory is inspiring insofar as it traces 
an attitude towards sound and music across disparate practitioners (Bell 2019). Catherine 
Provenzano’s discussion of the Autotune algorithm at the level of both individual artists 
and its wider significance in popular music culture is also relevant in terms of its breadth 
of scales (Provenzano 2019). Joe Pfender’s project on the mystical and the technical within
electromagnetic tape composition practices offers a precedent for the thorough consideration 
of a technical medium’s cultural significance (Pfender 2019). These all deal with technicity 
in great detail within the confines of a specific practice, but they also do not aim
to generalize their analyses into a theory of practice of electronic instruments. To summarize:
there are a number of study-specific analyses, but there is no precedent for a widely applicable 
type of literacy of the kind that Nakai calls for. 

Kurt Werner’s dissertation (Werner 2017) and related articles (Werner 2014; Werner
and Abel 2016; Werner 2017, 2018) appear complementary to the outline of the
theory I offered in my master’s thesis (Teboul 2015). 6 It is this outline I am developing here.

5. Email with the author, 6/7/2020. 

6. There is also a dissertation on the role of the capacitor in electronic music by Christina
Dörfling that remains untranslated in languages accessible to me at this time, but may also be
relevant (Dörfling 2019).


By proposing a technical understanding of musical electronics deep enough to create “circuit- 
bendable” digital models and complementing these with a sociological understanding of their 
original designers, Werner fulfills the objectives of reverse engineering as a method of inquiry, 
and offers unique insight on how to appreciate electronic music instruments’ influence on the 
pieces made with them as well as the social construction of that influence. Where Nakai did 
not dig down to the component-level understanding of technical systems in Tudor’s work, 
Werner offers us an example of how that could be done. Counterbalancing that, however, is 
the point that Werner did not (and was not trying to) contextualize the music of the TR-808 
(McKittrick and Weheliye 2017) or the other circuits he’s modeled in the same way that 
Nakai was providing a key to understand Tudor’s music, widely construed. Therefore, the 
question I am concerned with in the remainder of this dissertation is: how can we combine
the analytical processes presented in the works of Nakai and Werner to present a thorough 
techno-cultural reading of modern electronic music? 

Phrasing this concern differently, Nakai discusses the traditional tool set of music
studies, distinguishing two primary modes of analysis: first, one whose vector is musical notation,
and another, which is based on the social study of the relevant user groups, and the various 
methodologies developed to approach those user groups, such as interdisciplinary
performance studies (Cook 2013, 24-25). Historical studies tend to collect works which use these
primary modes and collate them into historical narratives at the scale of a person, a group 
of people, or a meaningful time period. Methodologically, my work offers another primary 
vector in the context of electronic music: the technical object. 

2.2.2 Media archaeology 

A guiding principle of this research project is to look outside of music studies for critical 
perspectives on technology that may be applicable to handmade electronic music practices. 
In this section, I will review two such perspectives: media archaeology and reverse
engineering, specifically as it is conceived of in the digital humanities and history of technology.

Jussi Parikka, one of media archaeology’s primary exponents, writes that the field is
“a way to investigate the new media cultures through insights from past new media, often 
with an emphasis on the forgotten, the quirky, the non-obvious apparatuses, practices and 
inventions.” (Parikka 2012, 2) Finding its roots in the genealogical and archaeological project 




of French historian Michel Foucault, Parikka and Erkki Huhtamo elaborate on the premise 
of the field, building off German theorist Friedrich Kittler in a manner reminiscent of Nakai’s
criticism of musicology vis-à-vis an engagement with the non-discursive technical media of
Tudor’s legacy: 

Kittler argued for the need to adjust Foucault’s emphasis on the predominance 
of words and libraries to more media-specific ways of understanding culture. 
According to him, the problem was that “discourse analysis ignores the fact that 
the factual condition is no simple methodological example but is in each case a 
techno-historical event.” To be able to understand media technologies from the 
typewriter to the cinema and on to digital networks and coding paradigms, one 
must take their particular material nature into consideration. (Huhtamo and 
Parikka 2011, 8) 

This need has led a scholar in the wake of Kittler’s work, the German historian
Wolfgang Ernst, to develop a perspective which Parikka summarizes as “operative diagrammatics”:

What could be called ‘operative diagrammatics’ refers in the case of Ernst to a
specific way of understanding the objects of media studies and the way they feed
into theories concerning ‘materiality-in-action.’ Operative diagrammatics is the
level where mathematics is incorporated into our technical media machines, and 
hence the real world. Instead of the story, narrative, or image, Ernst’s media 
archaeology posits the diagram as the starting point for an analysis of technical 
media culture: diagrams are to be understood in the very technical sense of a 
visualization of information patterns, circuits and relations which give an idea of 
how the otherwise so complex machines work. (Parikka 2011, 62) 

This is immediately relevant to a project like mine, concerned with the inclusion of 
technical media on the terms of the language used in its design process. However, upon 
closer inspection, rarely do Ernst’s works actually use mathematics or engineering concepts
to discuss technical media machines. 7 Recent translations such as Chronopoetics (Ernst
2016a) or Sonic Time Machines (Ernst 2016b) do contain extensive discussion of media 
experiences inflected with mathematical concepts, such as: 


7. Or at least, those accessible in English at the time of writing.

“This indescribability only disappears when a time range is successfully trans¬ 
formed into a frequency range entirely without metaphysics or a philosophy of 
history,” wrote Friedrich Kittler, with regard to the techno-mathematical process 
of fast Fourier transformation, which indeed replaces the time axis as the classical 
abscissa of causal chains with a frequency axis, whose units are inversely propor¬ 
tional to its units of time as evidenced metrologically on an oscilloscope. (Ernst 
2016a, 6) 

However, the longest mathematical formula in this book is y = f(x) (65). There is
no in-depth investigation of specific devices that engages with the technical documentation 
often produced alongside such devices, and there is little discussion of the potential of doing 
so. I am not arguing for a mathematical conception of electronic music as realized by analog 
and digital electronics; rather, I am interested in exploring this space left open by Ernst’s
work. I hope to build on his and Kittler’s prefiguration of Nakai’s comment regarding the 
legibility of technical media, taking their logic to a further level of engagement with technical 
documentation (and utilizing mathematics when appropriate). 

In doing so I do not follow Ernst’s “cold gaze” on media. Defining this perspective,
Parikka writes: “The trope of the ‘cold gaze’ is for Ernst a way of stepping outside a human 
perspective to the media-epistemologically objective mode of registering the world outside
human-centered sensory perception.” (Parikka 2011, 71) Parikka already outlines the
potential issues in his mathematical project (as unrealized as it may be) and its temperature: “one
could argue that part of the ‘techno-mathematic’ epistemology and the cold gaze remains 
coolness in the sense of wanting to remain further away from the messy world of political 
and social issues.” (67) Here I side with author and theorist Darren Wershler, who states: 
“there is no way to abstract the study of signal processing from its cultural articulations 
that doesn’t irreparably compromise research.” (Wershler 2016) In this short piece,
Wershler works from cultural theorist Stuart Hall’s concept of articulation to resituate media
archaeology within a humanist perspective (Hall 1996). Here, it seems important to
remember that, fitting the double meaning of Stuart Hall’s concept of articulation, the historian of
technology Mara Mills writes “The term signal processing was at first mostly applied to
digital coding and vocoder-based compression.” (Mills 2012, 116) Julian Henriques, discussing
Jamaican dance-hall culture, foreshadows Wershler with his own articulations: 




In media theory, Kittler (1999), for example, uses such a concept of materiality of 
media for drawing attention to the instrumental role of the technological means of 
communication. This emphasis has a fascination, as with for example considering 
Nietzsche as the first philosopher to write on a typewriter. But this tells only 
one side of the story. Without the sociocultural and corporeal wavebands of 
sounding - its sense and meaning - this approach inevitably tends to encourage 
technological determinism, matched by its opposite of voluntarism. (Henriques
2011, 211)

As I will show, there is in fact no way to abstract the study of signal processing from its
cultural articulations, nor from its “corporeal wavebands of sounding.” The co-constructive
processes that led them to exist have inextricably baked them into each other’s realities. I argue
here for a fiery gaze on technology, one which wholeheartedly embraces the messy nature 
of its arbitrary contingencies via a study of art’s embodied knowledge production. In Henk
Borgdorff’s words: “Part of the specificity of art research (...) lies in the distinctive manner in
which the non-conceptual and non-discursive contents are articulated and communicated.” 
(Borgdorff 2012, 48) My goal is not to suggest a return to technical analysis in music studies:
rather, it is to present how studying the technical aspect of contemporary sound elucidates yet 
another essentially social and relational aspect of its practice, deepening our understanding 
both of that practice and of its place in the wider context of music. Here I keep in mind
Shannon Mattern’s account of media archaeology’s mission: to help us assess the past more
effectively by challenging the apparent novelty that obscures it:


It’s this fetishization of “the new” that media archaeology is intended to counter, 
in part by encouraging consideration of the epistemologies materialized in differ¬ 
ent historical media formats. (Mattern 2017, 153) 

Here, it is precisely this consideration of the epistemologies materialized in handmade
electronic music and its myriad formats that I hope to apprehend. Above, I have presented
strands of music studies acknowledging the role of technology and the need for a deeper
engagement with the set of knowledge-rich artifacts it produces, and made a case for the fact that
there were unexplored possibilities. I have outlined concepts from science and technology




studies such as scripts and co-construction which are helpful in understanding the
mechanics by which electronic instruments and musical works come to exist. I have connected this
line of inquiry with the recent perspective offered by media archaeology. With each nod to 
these disciplines I have specified in more detail the space in which this dissertation operates. 
To conclude, I will review the relevant literature in circuit theory and reverse engineering. 
In this context, my critical perspective on these practices will act as the center point of my
method, which addresses the lacuna identified by Nakai, Born, Kittler, Ernst and Parikka
(the lack of engagement with technical objects on their own terms). 

2.2.3 From circuit theory, to systems theory, to reverse engineering of circuits 
and code 

My working hypothesis here is that understanding how electronic music devices shape 
the music made with them requires understanding how these electronics function and are 
used. Thorough fieldwork such as that done by Madeleine Akrich in her landmark study of 
solar power electronics in “Comment décrire les objets techniques?” (Akrich 1987, 10) or
by Georgina Born for her study of IRCAM (Born 1995) can precisely describe interactions 
between technical objects and humans; however, following Kittler, Ernst and
Nakai, I suggest that there is often more to a situation, which can be uncovered through a careful
analysis of the devices and their operation.

This requires the incorporation of both technical knowledge and technological history. 
Of course, the two are linked, so I will present a history of references describing the evolution 
from circuit theory, to systems theory, to reverse engineering and how these are applicable 
to this study of handmade electronic music. 

Circuit theory emerges in the nineteenth century from electromagnetic theory. The
founding theorems of the discipline, Ohm’s law (1827) and Kirchhoff’s laws (1845), see
relatively little use before the development of electronics research as the very active field it
will become around and after the world wars. Lee De Forest’s Audion tube (1906), the
first commercially available active amplification device, 8 means that the number of articles
on circuit theory goes from less than one a year in 1910 to over one hundred in 1954.

8. It should be noted that the Audion offered inconsistent performance until it was perfected
via a series of patent buyouts in the thirties and forties, before being replaced by semiconductors
through the sixties (Tyne 1977).

Vladimir Belevitch, a radio engineer and historian of circuit theory, writes: “Long before 1914
circuit theory had emerged, from general electromagnetic theory, as an independent discipline
with original concepts and methods. The conception of a circuit as a system of idealized
lumped elements is already firmly established; drawings of Leyden jars and rheostats have
gradually disappeared in favor of the now familiar graphical symbols.” (Belevitch 1962, 848)
Lumped elements abstract individual components, casting consideration of inconsistencies 
across the conducting medium aside. Scaling up from components to circuit topologies, a 
theory emerged using theoretical constructs developed by Ohm, Kirchhoff, Thevenin and 
other researchers to replace combinations of basic components (resistors, capacitors, induc¬ 
tors, vacuum tubes, transformers, later diodes, transistors, etc.) with singular black boxes 
which mimic the behavior of those combinations mathematically. A direct consequence of 
relevance to reverse engineering, then, is the concept of “divide and conquer” (Hamscher and 
Davis 1984, 142). If groups of components can be replaced by an abstraction, then divide 
and conquer approaches to understanding or troubleshooting a circuit or electronic system 
state that if a system is too complex to be understood in a single glance then theorems and 
experience can be used to divide the circuit into its functional subsystems. If black-boxing 
extended lumping’s logic from components to entire subsystems, divide and conquer is the 
strategy by which large systems are broken back down into these functional subsystems, 
the intermediate level at which a meaningful amount of reverse engineering operates at. 
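As a minimal sketch of this lumping logic, a small resistor ladder can be “divided” into series and parallel sub-networks, each “conquered” by replacing it with a single equivalent element viewed as a black box from outside. The component values below are my own, purely for illustration.

```python
# Lumped-element reduction: the arithmetic core of "divide and conquer."
# Each helper replaces a combination of components with one equivalent.

def series(*rs):
    # Resistances in series simply add.
    return sum(rs)

def parallel(*rs):
    # Resistances in parallel combine as the reciprocal of summed reciprocals.
    return 1 / sum(1 / r for r in rs)

# A small ladder: 1 kΩ in series with (2.2 kΩ parallel to 3.3 kΩ).
# From its two outer terminals, the whole network behaves as a
# single black-boxed resistance.
r_equivalent = series(1000, parallel(2200, 3300))
print(round(r_equivalent))  # 2320 (ohms)
```

Each call collapses a subsystem into one value; applied recursively, the same move scales from a pair of resistors to the functional blocks at which reverse engineering typically operates.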
Belevitch continues: 

The progress in circuit theory during this period is connected with the devel¬ 
opment of long distance telephony in more than one respect. First, the theory 
of loaded lines is closely related with the invention of the electric filter, which 
found an immediate application in carrier telephony (first realization in 1918, 
Pittsburgh-Baltimore). Secondly, the design of bidirectional amplifiers (two-wire 
repeaters) immediately raised several problems of network synthesis (hybrid coils, 
balancing networks, filters, equalizers), not to mention the question of over-all
stability. (Belevitch 1962, 849) 

The development of electronics theory after this effectively reiterates the concept of 
lumping components to divide and conquer circuits. This desire to control circuits goes 
both ways—quoting the engineer Sydney Darlington: “ ‘Circuit Analysis’ determines
characteristics of given circuits. ‘Network Synthesis’ is the inverse. 9 It determines circuits with
given (desired) characteristics.” (Darlington 1999, 5) Circuits begin to be described using
differential equations and matrix operations on systems of differential equations; this
knowledge allows the robust design of black-boxed systems with the characteristics needed
for various aspects of radio or transatlantic telephone and telegraph cables. In other words, 
these concepts and artifacts emerge co-dependent out of a desire to achieve specific human 
connections: electronic components, circuits, and the mathematics of communication used 
to describe them, are, in the early twentieth century, co-constructed. Darlington again: 

The development of submarine cables had an important influence on the develop¬ 
ment of circuit theory. Initially, transmission lines such as submarine cables were 
poorly understood (...) An understanding of transmission lines brought in such 
concepts as propagation constants (attenuation and phase), matched impedances, 
and reflections at nonmatching impedances. Lumped loading (initially proposed 
by Heaviside) was an ancestor of image parameter filters. Grounded versus bal¬ 
anced lines and “phantom” circuits brought in the concepts of three-terminal 
versus balanced two-ports and longitudinal versus transverse currents (5). 
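The reflections at nonmatching impedances that Darlington mentions can be made concrete with the standard transmission-line reflection coefficient, gamma = (ZL - Z0) / (ZL + Z0). This is a generic textbook formula, not taken from Darlington's text, and the 50-ohm values below are my own illustrative choices.

```python
def reflection_coefficient(z_load, z0):
    # Standard transmission-line formula: gamma = (ZL - Z0) / (ZL + Z0).
    # gamma = 0 means a matched termination; gamma = 1, a full reflection.
    return (z_load - z0) / (z_load + z0)

# A 50-ohm line terminated in 50 ohms is matched: nothing reflects.
print(reflection_coefficient(50, 50))   # 0.0
# Terminated in 150 ohms, half the incident wave amplitude reflects back.
print(reflection_coefficient(150, 50))  # 0.5
```

Matching a load to a line's characteristic impedance, so that this coefficient goes to zero, is exactly the kind of design problem that drove the concepts Darlington lists.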

As the mathematical modeling of electrical systems becomes more sophisticated,
design and manufacturing separate to more efficiently take advantage of individual skills in
circuit theory or metallurgy and work within the fullest extent of industrial possibilities.
Co-construction of electronics begins to take on the global scale we see today. Circuit
analysis and its predictive counterpart of network synthesis become systems theory,
unified by equations and diagrams. Historian of technology Edward Jones-Imhotep states 
that “practices of technical drawing are an integral part of the practice of electronics writ
large: it is rare to find a source on computing that does not contain some images, and
almost impossible to find such a source on electronics” (Jones-Imhotep and Turkel 2019, 108),
thus illustrating the continued practical motivations of the field. L. A. Zadeh, an electrical
engineering professor at U.C. Berkeley, describes the shift as being due to a fundamental
similarity between various systems, where the mathematics that govern circuit analysis and 
network theory have mechanical and magnetic analogs. As he states, that “is the
principle behind analog computation.” (Zadeh 1962, 856) Concepts such as feedback, filters or

9. This is the inventor of the Darlington pair (Horowitz 2015; Darlington 1953, 109). 




amplifiers and the models powering their design become entwined with the discipline of
cybernetics, which attempts to extend the analogy to social, biological and global dynamics
of any shifting quantity. The British historian of technology David Bissell summarizes this
evolution: 

From the end of the nineteenth century, communications engineers developed 
a new approach to the use of mathematics, initially through the use of
phasors, and then through increasingly sophisticated time- and frequency-domain
modelling. The abstraction of system components led to a “metalanguage” in 
which the manipulation of circuit configurations and other symbolic
representations became a natural consequence of - and, increasingly, an alternative to - the
mathematics. Systems ideas that originated with communications engineering 
were extended to other domains such as control engineering, and a highly
significant synergetic relationship developed between the mathematical modelling of
systems and components, the design of instrumentation, the development of the 
analogue computer, and the design of devices and systems (Bissell 2004, 607). 

Although many concepts and individual technical developments are traceable to a 
series of papers, reports, and other by-products of the knowledge production process, the 
coordination of these efforts in the post-war context is what is of interest here. Indeed, 
the drive for the successful invention and production of various artifacts, when combined with
the legal and geopolitical context of the era and the coveted nature of communications
devices (Wilf 1986), means that documentation is inevitably lost or unavailable when needed.
Therefore, recovering designs from existing systems (hardware and/or software) becomes an 
essential skill and process. As such, M.G. Rekoff, an electrical engineer at the University of 
Alabama-Birmingham, writes in 1985: 

Reverse engineering might seem to be an unusual application of the art and sci¬ 
ence of engineering, but it is a fact of everyday life. Reverse engineering may 
be applied to overcome defects in or to extend the capabilities of existing ap¬ 
paratus. Reverse engineering is practiced by the General Motors Corporation 
on Ford Motor Company products (and vice versa) to maintain a competitive 
posture. Reverse engineering is practiced by all major military powers on
whatever equipment of their antagonists that they can get their hands on. Reverse




engineering might even conceivably be used by major powers to provide spare 
parts and maintenance support to smaller powers who are no longer friendly with 
the original manufacturers of the weapons they have in their inventory. Reverse 
engineering is just a special case of systems engineering (Rekoff 1985, 244). 

The rampant complexity enabled by systems theory and the incorporation of circuit 
analysis and network design into a generalized theory of control and communication could 
therefore be seen as requiring the development of an intellectual tool for backtracking through 
what has been assembled. Yana Boeva, a historian of technology, further reminds us with her 
colleagues that “even when the historical sources were not intentionally designed to mislead, 
they assume a fair degree of tacit knowledge, and experimentation with small models has 
been useful for bringing some of this experience into sharp relief.” (Boeva et al. 2018, 165) 
Rekoff offers a robust model for the implementation of reverse engineering in a circuit 
context, while Biggerstaff or Chikofsky and Cross elaborate on its implications for software 
(Biggerstaff 1989; Chikofsky and Cross 1990). Both will be discussed in detail in the next 
chapter, but to conclude I discuss the historical potential of reverse engineering and the 
only very recent interest in reverse engineering as a tool of historical potential in media 
archaeology, STS and the digital humanities. Boeva et al. add: 

Within the history of technology, scholars have long privileged large-scale
industrial technologies as sites to understand social transformation. But electrical or
transportation infrastructures lend themselves poorly to experimental, hands-on 
exploration. By shifting the scale to smaller, often more intimate objects such as 
household technologies and personal computing devices, re-making or reverse
engineering can provide unusual insights about the micro-level contours that shape
our broader historical understandings. And in the absence of historical evidence 
about the design, use, and impact of material devices, those practices can help 
us to reenact and reimagine portions of the historical worlds they occupied and 
the meanings they held. (Boeva et al. 2018, 164) 

In the same volume as Boeva and her colleagues, I elaborated on the pronounced 
importance of reverse engineering as a tool to identify and evaluate knowledge transfer, 
especially in the context of electronic music devices (Teboul 2018). Boeva et al., although they 
do mention three projects in which reverse engineering was used, do not offer an analysis 




of the electrical circuits developed by the original authors, preferring to focus on cultural 
interpretations of the meanings of these projects. This points to a larger question: because 
of the unique legal context surrounding the protection of circuitry (Orman 2016), and 
considering the financials of handmade electronic music in contemporary commodity market 
conditions, there is a widely shared opinion amongst relevant practitioners that circuits 
are rarely entirely new (Teboul 2015, see interview with Dan Snazelle). In a sense, a number 
of electronic inventions cannot be traced back to their designers because the designers did 
not work to claim them as new. This relates to what the historian Allison Marsh, in her 
discussion of the development of the factory, calls anonymous technology: “technology that 
has adapted and evolved throughout history to meet specific needs with no known inventor.” 
(Marsh 2019, 147) A thorough discussion of electronic inventions as anonymous technologies 
is outside the scope of this dissertation; however, it is important to place handmade electronic 
music within this conflicted space of claiming developments as having a single author or as 
having been invented, when each of these developments tends to be the result of incremental 
and repeated experimentation. The concept of anonymous technology, in the context of 
handmade electronic music and diy electronics in general, is an invitation to examine the 
existing historical record: not to find mythical origins, but rather to assess the evolving 
cultural significance of these developments. Because of their contested commercial and 
personal nature, handmade electronic instruments are choice subjects for a history via 
reverse engineering. 

Having established the modern knowledge boundaries of this research project, in the next 
chapter I sketch the method used in chapters five and six to address the questions 
established in the introduction. 


CHAPTER 3 
Method 


As discussed in the introduction, this project is practice-based research. In this chapter, I 
detail an approach to a theory of analysis of past handmade electronic music works which 
has consequences for my own practice. The latter will be detailed in chapters five, six and 
seven. 

As this project involves research with human subjects, this methodological framing 
was subject to an institutional review, with specific attention given to its ethnographic 
component. The Institutional Review Board approved a protocol in March 2017 which 
outlined five central and roughly ordered steps of the method used in this project. 10 Since 
this process solidified my approach, I’ll begin by presenting these five steps 
before offering more detail for each. 

The five steps were: 

• The selection of a handmade electronic music work. 

• An in-depth analysis of the corresponding musical system, using materials found via 
archival research or intentionally provided by the maker of the device. 

• Communication with makers of the work to discuss my interpretations of their musical 
intentions based on each analysis. 

• An interpretation based on feedback from the maker and my analysis to conclude how 
this example fits within and informs an understanding of handmade electronic music 
practice. 

• The production of a document detailing the process for review by the informants. All 
interviewees and makers were given opportunities to review the final document for 
feedback. 

In practice these steps display significant overlap. It is possible to build digital and 
analog models of a system before having fully investigated an original, and it is possible to 

10. The five steps and the interview questions below are roughly adapted from this protocol. 






know roughly which questions to ask prior to such an investigation as well. In some 
circumstances, opportunities to discuss works with their makers had limited time windows: 
an interview with some answers is more valuable than no answer at all, especially in cases 
where practitioners’ memory, willingness or ability to talk is compromised by age or interest. 

3.1 Selecting a work 

Each case-study begins with the selection of a handmade electronic music work. Once 
a work has been selected, an assessment of which materials relating to the piece are available 
must be made. This includes primary sources, such as performance instructions, scores, 
recordings, documentation of past performances, and interviews with the people involved 
in its origin. It also includes secondary sources, which consist of any commentary on these 
elements. Tertiary documents (commentary on commentary) also tend to be useful for 
identifying and tracking more direct information about a work. 

In the context of this dissertation, the data collection phase tends to generate two 
general categories of information: information about the system, the elements which make 
the work electronic, and information about the piece, which makes the work music. 

This is the terminology I’ll use throughout this document: a work refers to the totality 
of the object of study; a system is the code, circuits, interfaces and devices; a piece is the 
musical result produced with the system. 

There is no consistent correlation between works, systems and pieces. Voyager, a system 
developed since the late 1980s by composer, improviser and professor George Lewis, has 
resulted in a large number of recordings, performances, and iterations of the system (Lewis 
2000). Yet, regardless of the number of pieces and works, it remains recognizable as Lewis’ 
semi-independent improvising computer music system. Conversely, Tristan Perich’s 2010 
One Bit Symphony is a combination of hardware and software forming a system which can 
only perform one eponymous piece: together this system and piece form Perich’s work. 

Returning to the selection of a case-study: this process might be paused if 
there is no information available on the system, or if essential aspects of it are unknown. For 
example, my study of Salvatore Martirano’s “Construction” was effectively dormant until I 
was able to obtain a copy of the dissertation pertaining to the construction of the system 
(Franco 1974). Similarly, my investigation of Charles Dodge’s 1970 work Earth’s Magnetic 
Field, which is based on a large number of experiments using early computer music software, 




revealed that I am missing two essential pieces of information: the original code repository 
Dodge used to collect all these experiments, organizing them in a specific structure which 
supposedly reflects this chronological process of experimentation; and a notebook, where he 
collected notes on each iteration which would have been out of place in computer code. 

In other words, the selection of a work viable for the type of study I propose here is 
inseparable from the collection of materials about the work. Here my goal is to understand 
the system well enough to be able to explain the majority of its behaviors and assess the 
origin of those behaviors on a spectrum from fully intentional and defined to the result of 
improvised experimentation and exploration. In short, my objective is to appreciate 
in as detailed a way as possible the amount of work done in a handmade electronic music 
work, and its origin. The selection of a work can only move forward once documents providing 
a reasonable expectation that this is possible have been secured. 

3.2 Analysis of the system 

Once these documents, whatever form they might take, have been collected, an assessment 
of the system’s functionality can be undertaken. Inspired by the “divide and conquer” 
approach common in circuit analysis, the behavior of the system is outlined and educated 
guesses are made about which fragments of the system (sub-systems) are responsible for each 
behavior (Rekoff 1985). Listening to recordings of the system in operation, as well as of pieces 
using the system, can be particularly useful; this is also where quantitative analysis can be 
part of the larger qualitative project. As I’ll discuss below, developing a digital model of Paul 
DeMarinis’ Pygmy Gamelan system/piece required a spectrographic analysis of a recording 
of the original system because the electrical schematic publicly shared by DeMarinis lacked 
some crucial component values which set the resonant frequency of the filters around which 
the piece is effectively based. 11 That spectrographic analysis provided the center frequency 
and resonance factor of each filter. Alternatively, I could have attempted to obtain one of the 
original devices (since DeMarinis states some are still in existence) and measured the values 
directly. Regardless, these quantitative methods tend to be in the service of a higher-level 
qualitative analysis, the reiteration of the poetic experience provided by the combination of 
artwork and technical system. 
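Concretely, a center frequency and resonance factor recovered from such an analysis can be dropped into a digital resonator. The following Python sketch is my own illustration, not DeMarinis’ circuit: the 800 Hz center frequency and Q of 20 are placeholder values rather than measurements, and the band-pass biquad is a standard digital stand-in for an analog filter. It rings the filter with a single pulse, the way a pulse from the circuit’s logic would excite it:

```python
import math

def bandpass_biquad(f0, q, fs):
    """Coefficients for a standard band-pass biquad resonator."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [alpha, 0.0, -alpha]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    # Normalize so the leading feedback coefficient is 1.
    return [c / a[0] for c in b], [1.0, a[1] / a[0], a[2] / a[0]]

def ring(b, a, n=4096):
    """Excite the filter with a single pulse and return its decaying ring."""
    x = [1.0] + [0.0] * (n - 1)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[0] * x[i]
                + (b[1] * x[i - 1] - a[1] * y[i - 1] if i >= 1 else 0.0)
                + (b[2] * x[i - 2] - a[2] * y[i - 2] if i >= 2 else 0.0))
    return y

fs = 44100
b, a = bandpass_biquad(f0=800.0, q=20.0, fs=fs)  # placeholder tuning, not measured
y = ring(b, a)  # a ping that decays slowly because of the high Q
```

With measured values substituted for the placeholders, comparing such an impulse response against a recording’s spectral peaks is one way to check that a model matches the original.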

11. As discussed in Chapter 5, this is intentional: the values are left open because multiple devices were 
made, each with different components and therefore, tunings. 




This is usually done through the establishment of a diagram which summarizes the 
principal components of a system and their role within it (Parikka 2011). In some cases, 
the documentation obtained in the previous step will include a functional diagram, but it 
is possible to deduce one from a variety of other documents if not. For example, circuit 
diagrams tend to be separable into a series of partial diagrams corresponding to well-known 
configurations of basic electrical components. Drawing from my master’s thesis, the “Devil 
Robot” effects unit clearly shows a set of components dedicated to greatly amplifying the 
incoming signal followed by another set of components which divide the frequencies present 
in the distorted signal by various powers of 2 (from 2 to 4096). In a musical context, we 
know this means the first sub-system distorts the signals, while the second generates a copy 
of the distorted signal, one and two octaves below (Teboul 2015, 47), further distorted by 
the clock divider’s preference for pulses. 12 If the circuit diagram is simple enough, it can 
effectively act as a functional diagram as well, because the reader will be familiar enough 
with each abstract construct to identify them as these well-known arrangements. In this 
document, I’ll refer to such arrangements as circuit topologies, a common term in circuit 
analysis (Horowitz 2015). 
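The two-stage behavior just described can be sketched numerically. The following Python toy model is not the actual Devil Robot schematic: the gain, test frequency and sample rate are arbitrary. It first clips a sine into a near-square wave, then passes it through a single divide-by-two stage of the kind cascaded inside binary counter chips:

```python
import math

def hard_clip(x, gain=1000.0):
    """Crude fuzz: enormous gain then clipping, turning the input into a
    near-square wave (the amplifying first sub-system)."""
    return [max(-1.0, min(1.0, gain * s)) for s in x]

def divide_by_two(square):
    """One stage of a binary counter / clock divider: the output toggles on
    each rising edge of the input, halving its frequency (one octave down)."""
    out, state, prev = [], -1.0, square[0]
    for s in square:
        if prev <= 0.0 < s:  # rising edge detected
            state = -state
        out.append(state)
        prev = s
    return out

def rising_edges(sig):
    """Count upward zero crossings, a stand-in for measuring frequency."""
    return sum(1 for a, b in zip(sig, sig[1:]) if a <= 0.0 < b)

fs, f = 48000, 440.0  # arbitrary sample rate and test pitch
x = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]  # 1 s of sine
sq = hard_clip(x)
octave_down = divide_by_two(sq)
# The divided signal shows roughly half as many rising edges as the clipped one.
```

Chaining such stages halves the frequency again at each step, which is how a single counter chip yields the successive sub-octaves mentioned above.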

3.2.1 Canonical and vernacular circuit topologies: identifying connections 

“Well-known” deserves to be clarified before moving ahead. In practice, deducing the 
origin of each functionality in an overall system and its contribution to the individuality 
of a handmade electronic music work can be difficult if the proper references haven’t been 
identified. Dan Snazelle, a musician and owner of the audio electronics brand Snazzy FX, 
reminds us that few new circuits are entirely original in the sense that most are variations on 
standard circuit topologies adapted for particular needs by modifying a subset of components 
and values thereof (Teboul 2015, 151). In that sense, our understanding of a circuit is often 
dependent on the amount of historical information available about it. 

This information can come from classic circuit theory (Rekoff 1989; Robbins and Miller 
2012). The “example circuits” produced for engineering textbooks since the emergence 
of educational materials on the topic are extremely varied, and many have been modified 
to serve in electronic music devices. Eric Barbour, owner and designer of the vacuum tube- 

12. For a more detailed description of the analog behavior of digital counter integrated circuits in musical 
contexts such as this one, see Werner and Sanganeria (2013, 4). 




based modular synthesizer brand Metasonix, partially designed his modules with ideas that 
he “literally stumbled over, after years of looking for 60-year-old books.” 13 

In other cases, references are made to prior audio systems and audio circuit design. 
Here, there are two principal sub-cases: prototypes by the same author (e.g. earlier versions 
of the same design or earlier designs that inspire the current work); and those made 
by previous experimenters, as in the case of the Devil Robot mentioned above. There, 
designers and circuit makers Louise and Ben Hinz worked from a design by Devi Ever, itself 
inspired by pre-existing circuits like Collins’ “Low-Rider” octave effect (Collins 2006, 159) 
and preamplification circuit (155), the latter itself based on one of Craig Anderton’s circuits 
(Anderton 1980, 173). 14 Assessing the origin of a particular circuit and working from the 
proper comparison, whether it’s “canonical” or “vernacular,” has relied up to this point on 
tacit knowledge of databases, resources and personal collections. Knowing where to look and 
who to ask is often as important as being able to do the analysis. 

This issue of putting together the history of a specific system is exacerbated by the 
chaotic, distributed, impermanent and personal nature of the digital sharing tools which have 
taken over handmade electronic music as a practice. Prior to the pre-eminence of online 
resources, print and in-person instruction created somewhat messy conditions which pre-figure 
this chaotic sharing of ideas. David Tudor, for example, worked primarily from commercial 
devices, ideas provided by his collaborators such as Gordon Mumma, and popular electronics 
magazines of various kinds. What is left of his collection of reference materials can be 
examined at the Tudor archive at the Getty Research Library in Los Angeles, a resource 
which forms the basis for You Nakai’s groundbreaking analysis of Tudor’s music in relation to 
his circuit work (Nakai 2017b). Building on this, today’s practitioners might 
use a combination of books such as Handmade Electronic Music and research articles on 
Tudor, but also blog posts, forum discussions and Google image searches for various types of 
schematics. This fragmentation of written sources via digital means has only accelerated the 
proliferation of devices and the formation of “imagined communities” around the practice 
of handmade electronics (Anderson 2006). The mechanics of this phenomenon are outside of 
the scope of the current document. For the moment, I’ll summarize my tactics for the proper 
contextualization of each system as follows: the study of a handmade electronic music work 

13. Email communication with the author, 10/24/2010. 

14. For a more detailed discussion of this, see Teboul 2015, chapter 3. 




should include an investigation of the context in which the work was done. Potential 
collaborators and influences should be listed and verified as possible via interviews with the 
maker. If the maker is not able to provide this information for any reason, attention to the 
stylistic markers in the documents produced during the development of the work (such as 
the identification of arbitrary component values in schematics, or of unique patterns in 
computer code) can indicate with various degrees of certainty the possibility of cross-pollination 
of ideas. However, there is no connection as easily established as the one expressed by the 
maker themselves: as such, communication with them whenever possible is invaluable to 
understanding the genesis of handmade electronic music projects. 

3.2.2 Extending this analysis to computer code in musical works 

One of the original motivations for this dissertation project was Georgina Born’s 
admission that the computer code produced at IRCAM would have been a useful source of 
information in her anthropological study of the institution, but that her relative lack of tools 
to engage with this medium prevented investigation in that domain (Born 1995, 8). 
Considering the prevalence of code in modern electronic music works, here I outline the parallels 
between the study of circuits and computer code which enable us to consider such code in 
detail in the study of an electronic music work. 

Just as with other technical abstractions such as circuit schematics, computer 
programming languages tend to have a set of canonical uses outlined in manuals, manufacturer 
specifications, and other documents produced by the designing engineers. Building on top 
of this documentation, a wide range of user-produced codebases and implementations are 
available, and, as with circuits, digital communication tools have made these 
much easier to share. These past canonical and vernacular uses of computer code’s musical 
potential can be used to better understand a work of digital music one wishes to study. 

These codebases have affordances and constraints unique to them and different from 
circuit schematics or assembly diagrams (Born 1999). Because electronic circuits and the 
diagrams used to describe them stem from signal processing, graphical representations of 
circuits tend to prioritize renderings that make it possible to understand those signal flows. 
Contrasting with this paradigm of description as separate from the actual technical object, 
computer code is unique in the sense that source code (often augmented with comments) is 
often both the representation of the system and its implementation. That is, with proper 



training, computer code can be read both to understand it semantically and to execute it. Both 
paradigms contain stylistic or arbitrary elements which can be leveraged in the establishment 
of possible connections between the work being studied and works surrounding it. This is, 
to a certain extent, what science and technology studies scholar Basile Zimmermann does 
in his research on experimental musicians in China (Zimmermann 2015, 125-148). There, 
he outlines the process by which MaxMSP patches made by him and his collaborators are 
inspired by and simultaneously extend or contradict a variety of inspiration sources. In a way, 
the method offered here is a formalization of the epistemological processes of identification, 
comparison, analysis and extension implicitly discussed by Zimmermann. 

3.2.3 Interfaces 

Interfaces offer an interesting additional level at which to consider the design of a 
system as it enables the composition and performance of a piece. Particularly in the case 
of handmade electronic music, where the system, including the interface, may be modified 
in real time as part of the piece (see for example Ralph Jones’ 1976 Star Networks at the 
Singing Point), the design of the interface reflects a particularly crucial aspect of the original 
maker’s work: how it might be interacted with, if at all (R. Jones 2004). 

Wendy Chun’s wide-ranging conception of interfaces as “mediators between the visible 
and the invisible, as a means of navigation [which has] been key to creating ‘informed’ 
individuals who can overcome the chaos of the global capitalist system” (Chun 2011, 
8) is usefully complemented by Lori Emerson’s perspective on this connecting layer: “the 
interface is equal parts user and machine, so that the extent to which the interface is designed 
to mask its underlying machine-based processes for the sake of the user is the extent to which 
these same users are disempowered, as they are unable to understand— let alone actively 
create—using the [system].” (Emerson 2014, 47) 

Interfaces can connect the user to all levels of operation of a system. A knob or a 
switch might affect one parameter just like it might entirely change every parameter in the 
system, especially in digital systems. Active creation using the system is rarely as obviously 
hindered or encouraged as in the interface. In that sense a particular attention to the design 
of the interface in a handmade electronic music work should clarify whether decisions are 
inherited from prior works, influenced by the inner workings of the system, or entirely intentional. 

For example, in Nicolas Collins’ 2009 piece In Memoriam Michel Waisvisz, the light 




from a candle controls the behavior of a series of interconnected square wave oscillators. The 
candle is placed on the circuit board, between photo-resistors which connect the intensity of 
the light produced by the candle to the timbre, volume and pitch of the oscillators. A player 
interacts with the system by walking away from the small, battery powered system (which fits 
in an Altoid tin) and directing a small battery powered fan towards the circuit and candle. 
Because of the usually large distance between the small fan and the small candle, the erratic 
movements of air in the room combine with the small but consistent motion provided by the 
fan to make a difficult-to-control system. Together, this challenge and the motions of 
the player in reaction to it define the piece at hand. In other words, the circuit built by 
Collins for this piece is a standard example in the Handmade Electronic Music book that’s 
been adapted to the occasion of a lost friend (Michel Waisvisz passed in 2008) to implement 
an indeterminate and almost life-like homage. The interface of the system directly embodies 
and realizes this intention and attitude. 
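A rough sense of how a photoresistor couples light to pitch in circuits of this family comes from the relaxation-oscillator approximation f ≈ 1/(kRC). The Python snippet below is a back-of-the-envelope sketch only: the constant k and all component values are illustrative guesses, not measurements from Collins’ circuit.

```python
def oscillator_freq(r_ohms, c_farads, k=0.8):
    """Approximate frequency of a Schmitt-trigger relaxation oscillator:
    f ~= 1 / (k * R * C). The constant k depends on the chip's switching
    thresholds; 0.8 is a ballpark guess, not a measured value."""
    return 1.0 / (k * r_ohms * c_farads)

# A photoresistor might swing from roughly 10 kOhm in bright light to
# 1 MOhm in darkness (illustrative figures for a generic part):
c = 10e-9                          # 10 nF timing capacitor (arbitrary)
bright = oscillator_freq(10e3, c)  # flame flares up: pitch rises
dark = oscillator_freq(1e6, c)     # flame dims: pitch falls ~100x
```

The two-orders-of-magnitude pitch swing suggested here is one reason a flickering flame, mediated by nothing more than a resistor, can animate the oscillators so dramatically.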

Together, electronics, computer code and interfaces offer fruitful starting points for the 
analysis of handmade electronic music works if we take the technology which prompts this 
music on its own terms. However, remembering the perspective of media theorist Darren 
Wershler from the previous chapter: “there is no way to abstract the study of signal processing 
from its cultural articulations that doesn’t irreparably compromise research.” (Wershler 
2016) In that sense, the complement to these analyses must be a technosocial assessment of 
the maker’s context and internal dialog, to the extent that such an assessment can be made. 

3.3 Communication with makers 

This component of the research is once again informed by Born’s anthropological 
research at IRCAM (Born 1995, 1999). Following her approach of tracing the interactions of 
individuals as part of a wide ecosystem of institutional, personal, and material forces, I have 
approached the assessment of personal perspectives as essential to a better understanding of 
handmade electronic music works. 

For this step I had originally developed a set of 10 predetermined questions, centered 
around five themes as detailed below. Although I conducted a number of interviews according 
to this script, later interviews followed the same themes but had questions more directly 
informed by the close study of a specific work. Interviews ranged from thirty minutes to 
two hours, depending on interest, and were often followed by extensive email or in-person 




contact. The questions are based on a set of topics initially tested for my master’s thesis. 
They are meant to enable some comparisons across case studies. Participants were welcome 
to decline a question and to elaborate on what seemed most interesting to them. In most cases, 
the case-study was reviewed by the author of the work discussed, sometimes leading to 
adjustments, edits and expansions. 

Technology: 

• What is the current place of hardware design in your practice? 

• What parallels do you draw between technical processes and other disciplines or hobbies? 

Limitations: 

• When designing audio hardware, how do you approach challenges and limitations? 

• How have proprietary designs / tools and planned obsolescence affected your practice? 
Community: 

• How important have other people been in the technical development of your practice? 

• What is your current professional community like? 

Ethos: 

• To what extent do you identify with a tradition of use, re-use, misuse or subversion of 
technology for the arts? 

• Do you feel compelled to help people with the same design questions you had when 
you started? 

Music: 

• How close do you feel to avant-garde or experimental music traditions? 

• How has your awareness of those traditions influenced your hardware work? 





The goal of this method is to identify elements of a work which are intentionally 
established by the maker while understanding the processes that lead to the more ad-hoc 
development of the other elements: in short, to identify the source and nature of the work that 
builds up to form works. An analysis of the system may allow one to make very well educated 
guesses about this, but no assessment is as well established as one confirmed by the maker of 
the work: an email containing the questions necessary to establish such connections can often 
suffice to make significant progress. For example, in my interview with the U.S. composer 
Philip White, he detailed the extent to which the development of his “feedback instrument” 
led to an early period of experimentation assessing the compositional possibilities of the 
system, resulting in a number of pieces which are now available in commercial recordings. 
However, breaking with the narrative that his practice was purely constituted of such an 
exploration, his more recent works return the instrument to a context where 
it serves pieces not framed exclusively by the exploration of the system. 

3.4 Re-interpretation: “covers” 

In the course of this dissertation I have been enthusiastically making reproductions 
of past, partially documented systems such as the ones discussed in chapter five. These 
reproductions are made with the purpose of re-creating a specific musical behavior or specific 
musical affordances. In trying to name this aspect of my case-studies, I considered the 
following terms: re-creation, re-invention and re-iteration. 

Re-creation appeared to me as too self-congratulatory. Although I do claim some level 
of originality, the goal of the case-study is to get closer to the original musical affordances 
of a given system, and therefore re-creation appeared as implying too much of a transferral 
of ownership which might erase the extent to which each project was a collaboration as 
mediated by the materials and documentation available regarding a specific work. 

Similarly, the term re-invention carries baggage relating to patent law vocabulary, 
specifically to the concept of “independent invention,” where two inventors simultaneously 
develop similar or identical works. This felt closer to the technical work often at 
hand in my project; nevertheless, I also felt that it erased my attempt at offering a reasonably 
close user experience to the original. 

Re-iteration, therefore, felt somewhat appropriate. Although this word was used in early 
drafts of this dissertation, I remembered a conversation I had with an audience member 





after my performance of Pulse Music and Pulse Music Variation at the Arete Gallery in 
Greenpoint, N.Y. Upon being asked if I “did this often,” I replied that “no, I don’t usually 
do covers.” Although I did not mean this as a joke, the contrast between the context of new 
music performances in which Steve Reich’s music usually occurs and the popular implication 
of the term “cover” made my response pass for humor. 

A cover is defined in the Grove Music Online Dictionary as “term used in the popular 
music industry usually for a recording of a particular song by performers other than those 
responsible for the original recorded version; it may also be applied to a re-recording of a 
song by the original performers (generally using pseudonyms) for a rival record company.” 
(Witmer and Marks 2001) On some levels, it perfectly describes the attitude with which 
I approached my playing of the original works: covers can be very close or very far from 
the original, played in the style of a third artist, adapted for various political or artistic 
purposes, etc. On another, it collapses the traditionally un-popular tendency for handmade 
electronic music to operate on the fringes, even of contemporary classical or experimental 
music. “Cover” echoes Gordon Mumma and Reed Ghazala’s use of the term “folk” (Ghazala 
2004; Mumma 1974), more recently seen in Digbee’s electronic instruments book (Digbee 
2019). “Cover” implies a tacit agreement that good ideas can find life in new forms generated 
by the fluid mechanisms of popular culture. I will therefore discuss some of my re-iterations 
of these various works interchangeably as “covers.” 

3.5 Production of documentation 

As mentioned above, these aspects of the case studies tend, in practice, to overlap 
significantly. These overlaps sit uneasily with the linear narratives preferred 
in writing in general and in academic writing more specifically. In this dissertation, I have 
done my best to produce coherent accounts of each study which illustrate not just the facts 
produced by this approach to the shape-shifting topic of handmade electronic music, but 
also provide the documentation necessary for future re-iterations, or, as I call them, covers. 

This is particularly true with the documentation of my own works, which make evident 
the extent to which even thorough documentation of digital works can still obscure some 
important aspects of the making process. Because there is a significant amount of technical 
experimentation, and because it tends to be difficult to predict the success of a particular 
experiment, some of the sub-processes that lead to the work as it is documented may not 




be archived as would have been preferable in retrospect. I will elaborate on this inherent 
contradiction in the production of documentation across the rest of this dissertation. 

3.6 Example 

To clarify the practicality of each of these tasks, and how they inevitably overlap in 
unpredictable ways, we can take a brief look at Paul DeMarinis’ Pygmy Gamelan. This project 
was informed by regular contact with the author, who greatly assisted in an understanding 
of the work even though my study of the system was fairly independent. My interest in this 
particular work was prompted not by the chapter by Fred Turner describing the piece in 
Ingrid Beirer’s DeMarinis retrospective book, but rather by the fact that at some 
point in 2018 DeMarinis decided to post a version of the schematic used for the first version 
of the circuit on his public webpage (Turner 2010; Beirer 2011; DeMarinis 2016a). 

The schematic details two radio antennas which feed a signal to two integrated circuits, 
which are bit-shifting registers. Bit shifting is a process by which a 0 or 1 (a bit) is written 
to memory, then pushed down a series of memory slots as new values are written to the top 
slot. The rate at which new bits are written and pushed down is controlled by an electronic 
clock, which produces clearly defined cycles of high and low voltages. The shift registers in 
DeMarinis’ circuit, as detailed by the schematic, have 8 slots of memory, making them 8-bit 
shift registers because they always hold 8 bits in memory. If at the next clock cycle one of 
the antennas picks up a loud enough signal, the input bit for that clock cycle will be set to “1” 
and that bit will step through the register until it is discarded after the 8th step. Five of the 
step outputs are connected to band-pass filters. Although one could calculate the resonance 
factor and the center frequency of each filter from a properly annotated schematic with all 
the necessary values, the schematic DeMarinis shared does not include these. 
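The clocked stepping described above can be modeled in a few lines. This Python sketch is my own toy abstraction of an 8-bit shift register, not code derived from DeMarinis; in the circuit, each tapped slot would excite one of the band-pass filters when it holds a 1:

```python
from collections import deque

class ShiftRegister:
    """Toy model of the 8-bit shift register described above: one bit enters
    on every clock tick and steps through eight slots before being lost."""
    def __init__(self, length=8):
        self.slots = deque([0] * length, maxlen=length)

    def clock(self, input_bit):
        # appendleft pushes every stored bit one slot down; the oldest
        # bit falls off the far end and is discarded.
        self.slots.appendleft(1 if input_bit else 0)
        return list(self.slots)

reg = ShiftRegister()
reg.clock(1)                 # an antenna hears a loud enough signal: a 1 enters
states = [reg.clock(0) for _ in range(7)]  # quiet ticks: the 1 steps through
# After eight ticks the 1 occupies the last slot; the next tick discards it.
```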

Luckily, DeMarinis did share a 1978 recording of another piece, Forest Booties,
which features the ringing bleeps of the Pygmy Gamelan circuit extensively and noticeably.
A Fourier analysis of the recording indicates five peaks corresponding to the
sound of a positive bit hitting these band-pass filters and ringing accordingly.
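The kind of peak-finding this involves can be sketched as follows. The five center frequencies below are invented placeholders, not values measured from the Forest Booties recording, and the decaying sinusoids stand in for loading actual audio; the point is only to show how ringing filters surface as spectral peaks.

```python
import numpy as np

# Placeholder center frequencies; a real analysis would load the recording
# instead of synthesizing a signal.
fs = 44100
centers = [523.0, 659.0, 784.0, 988.0, 1175.0]
t = np.arange(fs) / fs  # one second of samples

# Model each band-pass filter "ringing" after a pulse as a decaying sinusoid.
signal = sum(np.exp(-3.0 * t) * np.sin(2 * np.pi * f * t) for f in centers)

# Magnitude spectrum; with a 1-second window, each bin is 1 Hz wide, so the
# bin index is directly a frequency in Hz.
spectrum = np.abs(np.fft.rfft(signal))

# Local maxima above a fraction of the strongest peak are candidate
# filter center frequencies.
peaks = [i for i in range(1, len(spectrum) - 1)
         if spectrum[i] > spectrum[i - 1]
         and spectrum[i] > spectrum[i + 1]
         and spectrum[i] > 0.2 * spectrum.max()]
```

Each ringing filter appears as one peak in `peaks`, which is how the five resonances of the Pygmy Gamelan circuit can be estimated from a recording alone.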

A quantitative tool such as Fourier analysis (which calculates how much of each
frequency in the audio spectrum is present in a recording) contributes to the qualitative
understanding of the system, and therefore, the piece. In this dissertation, quantitative analysis
informs a qualitative understanding of the work. In the case of Pygmy Gamelan, the system
does not require user input after being powered on (even though it includes touch plates
for doing so if desired): it is an example of a work where the piece and system are almost
equivalent. It does not need any interaction to be performed. Somewhat like Steve Reich’s
1967 Pendulum Music, Pygmy Gamelan only needs to be plugged in to start and its power
cut to end.

The digital model of the circuit I assembled on the basis of the schematic shared
by DeMarinis and this quantitative analysis can be considered a digital rendering of the
original work, one which shows that its underlying mechanisms have been
understood. Furthermore, as it sounds virtually identical to what is audible in Forest Booties,
it is a relatively successful case study of the piece based on the criteria detailed above.

However, further email correspondence with Paul DeMarinis revealed that this was only one of two
versions of the piece built. DeMarinis describes the second set of six units he built in 1976
for the Composers Inside Electronics show at the Musée Galliera in Paris, France: “Each unit
has a unique pitch set and different rhythmic scheme, the second edition had much more 
varied logic sequencers beyond the TTL shift register, including one that used the RCA 
CMOS 1-bit microprocessor!” 15 In that sense, the study of the work revealed the extent to 
which it had been incompletely documented. 16 

Let us briefly return to Chikofsky and Cross’ summary of the purpose of reverse
engineering: “increase the overall comprehensibility of the system for both maintenance and
new development” (Chikofsky and Cross 1990, 16). In a sense, the first purpose of this
dissertation is to show that by thoroughly assessing our understanding of a particular
musical system, we can, in case the system is not available for literal maintenance, potentially
convert maintenance of a particular artifact into maintenance of the poetic experience of
that artifact in use, through the development of replacement systems. The second purpose
is to exemplify some of the shapes “new developments” might take, and how to document
them. Practice-based theory, then, isn’t simply a choice, it is a necessity: indeed, one cannot
do justice to the poetic experiences provided by these artifacts without experiencing them.

15. email with the author dated 6/4/2019. 

16. Although, as DeMarinis points out in his review of this section: “Is there any imperative that a work of 
art be documented technically? I think the wonderful point of your thesis is that technical information, when 
required for the understanding of a work of art, can be gained forensically post-facto” (email with the author, 
5/7/2020). A discussion of the value of documentation versus that of the work of forensic investigation is 
out of the scope of this dissertation, but it certainly poses a clear choice for the artist: document process, 
or let a work be examined in the future on the basis of its material conditions of survival? 




Using the terminology developed by sociologists of technology Harry Collins and Robert
Evans, knowledge for these types of systems requires both contributory and interactional
expertise (Collins and Evans 2002, 244). To conclude, consider the argument made by Jussi
Parikka and Garnet Hertz in their landmark article, “Zombie Media: Circuit Bending Media
Archaeology into an Art Method” (Hertz and Parikka 2012). There, the authors outline
circuit bending as a form of “non-expert” knowledge which turned the exploration of obsolete
consumer electronics into the basis, if not the embodiment, of an art practice. Connecting
this perspective, reminiscent of Reed Ghazala’s in “Alien Instruments,” with Nicolas
Collins’ approach in “Handmade Electronic Music,” this dissertation frames the analysis of
past technical systems and their social context as an epistemological and acoustemological
project which transforms repeated non-expert bending into a hybrid form of artistic mastery,
occasionally accompanied by a more traditional understanding of knowledge of materials.


CHAPTER 4 
Foundation Work 


Prior to moving on to new case studies there are a number of my own preliminary and 
peripheral publications which form a coherent whole with the present document. In this 
chapter I map these out in order to present the current work as one part of a longer arc 
concerned with the mechanisms by which humans co-produce instruments, musical works, 
and social groups from their technological everyday. 

As stated previously a large part of this project was motivated by my master’s thesis, 
which I consider a preliminary investigation for this dissertation (Teboul 2015). My thesis 
presented some aspects of circuit and code analysis combined with interviews, prefiguring
the method outlined here in chapters two and three. More specifically, it focused on adapting
the concept of post-optimality, the productive user-unfriendliness of technical systems, as an 
alternate lens through which to view the development of electronic music systems (Dunne 
2005, xvii). Although it was originally coined in the context of design theory, something 
like post-optimality is useful to name the mechanism by which people appreciate the use 
of inconvenient tools for an artistic purpose. It highlights the non-measurable nature of 
some poetic experiences, and the ways in which some objects seem to encourage or catalyze 
these experiences regardless of their objective quality as objects. Post-optimality is a way
to discuss the phenomenon by which people would rather carry an amplifier or synthesizer
ten times as heavy as a laptop to their performance because of the slightly more familiar 
combination of overtones it produces and the user experience it offers. 

Post-optimality has a privileged connection with open source design practices and diy
culture, which has repercussions on how to write histories of these cultural production
processes (Teboul 2018). Indeed, if the post-optimality of what I now call handmade electronic
music systems means that arbitrary decisions have a heightened importance, which is visible 
in various generations of implementations by a diversity of authors, then archival work must 
contend with the chaotic nature of the sharing platforms used to propagate these arbitrary 
constructs. 

Dealing with this chaotic nature of the practice of handmade electronic music is
especially important because a close technical look at the milestones of various diy fields reveals
the extent to which arbitrary decisions and serendipitous discoveries decide the course of
history. Prior to the development of a global supply and consumption chain for electronics,
most or all electrical devices would today be called diy: a critical and technically
literate look at electronics research in and out of audio technologies allows us to understand 
that the socio-technical context of any so-called development often demystifies the process of 
invention, distinguishing accidents from intentional work towards an invention. Against the 
figure of the genius solitary inventor, traditionally coded male, white, and in power, we find 
a collection of heterogeneous tinkerers all stumbling between their ideas and usually trying 
to profit off of them, at least enough to satisfy their tinkering habits. This re-contextualizes 
hacking: if there is no objective measure of good or smart, then transgression is not much 
more than simple experimentation outside of what has already been done (Teboul 2017b). 
This nuancing of the genius is a re-contextualization of expertise: if there is no correlation 
between a good musical performance and instrumental skill, then perhaps the traditional 
assessment of “playing music” can be recognized as being more in the ear of the listener than 
in the intention of the player. 

The various historical denominations of diy (hacking, bending, tinkering, making, 
thinkering, etc.) and their relationships to compositions were clarified in a book chapter 
for the Bloomsbury Handbook of Sonic Methodologies (Teboul 2020b). More specifically, 
that chapter discussed the relationship between two principal proponents of hardware exper¬ 
imentation in electronic music, Reed Ghazala and Nicolas Collins. As previously discussed, 
the former’s practice of “circuit bending” can be summarized as the modification of pre¬ 
existing electronic devices and waste into new instruments rather than the development of 
the latter from mostly new parts. Ghazala expresses a positive relationship with ignorance 
of the internal mechanisms, positing “theory-true” knowledge as not necessary even if not 
necessarily detrimental (Ghazala 2004, 2005). Collins, developing that, offers some circum¬ 
stances where knowing how to wire a new integrated circuit or additional control can save 
a certain amount of effort and help maintain motivation, although that should hardly be an 
essential condition to the practices. His conception of hardware hacking is not contradictory 
to circuit-bending, rather, circuit-bending can be seen as a subset of hardware hacking de¬ 
veloped independently by Ghazala. Furthermore, my chapter also outlined how hacking can 
be applied to various mediums: mechanical, electrical, and ideological, without major
philosophical departures. In other words, paraphrasing media theorist Mackenzie Wark, hacking
as a mode of composition makes the new come out of the old - any and all of it. (Wark
2004). 

Elaborating on the ideological nature of hacking and handmade electronic music more 
generally, the current vitality of the practice in North America and Europe is covered in 
my contribution to the online section of the third edition of Handmade Electronic Music 
(Teboul 2020a). Complementing an ongoing project dedicated to documenting diversity in
the handmade electronic music community, I update Collins’ original review of the field,
which concluded the second edition of his book (Collins 2006), to suggest both trends in the
sliver of the community I am privy to, and that the practice perpetuates its fragmented
tradition (Teboul 2017a). In a 2015 presentation at the Alternative Histories of Electronic
Music conference, given at the Science Museum in London, I had put forward the concept of 
“component-level” analysis of electronic music. Over the course of my dissertation project,
discussions with Curtis Bahn, himself a lifelong participant in computer and electroacoustic
sound and music, yielded the valuable point that not all works were designed at the component
level: although an analysis method for handmade electronic music would need
to be able to engage with the component if that was where expressivity was to be found, it
was misleading to label the entire project as such. This led to the more variable geometry
of the method presented in chapter 3. This variable geometry and these mixed methods were
presented at the 2019 meeting of the Society for the History of Technology, where I discussed 
how Manuel DeLanda’s development of the assemblage as a “concept with knobs” helps 
us grapple with these fragmented, nonlinear histories which the method developed in the 
previous chapter helps us extract from artifacts, archives and experiences (DeLanda 1997, 
2016). There, I discussed Siegfried Zielinski’s concept of anarchaeology, which highlights the
role of active practice as a complement to the traditional historical investigation of the
archive (Zielinski and Winthrop-Young 2015; Striegl and Emerson 2019). Then I showed how
it might complement Thor Magnusson’s rhizomatic view of organology, in which electronic
music instruments might be associated with each other in entirely ad-hoc, user-constructed
maps not bound to any sort of overarching logic, in contrast to the traditionally hierarchical
nature of western organology (Magnusson 2017; Gnoli 2006). True to DeLanda’s “concept 
with knobs” I proposed that a proper assessment of any instrumental system is best served 
by an oscillation between one setting and its negative, between an ad-hoc assessment of the 
system within the history of all other systems and as a starting point for both archival,
hierarchizing research and generative, anarchival production.

In that sense it is important for me to acknowledge the original artistic motivation 
behind both my master’s thesis and the present dissertation. This project began with a 
concern for access in the so-called democratization brought on by cheap computer and diy 
digital electronic music technologies. Did these resources - material and intellectual - really 
serve to include people not previously part of the media arts community, or did they just allow
grad school students and professors to spend more money on festival air flights because they 
no longer needed to buy Mac Minis for installations? 

My intuition was and is for the latter. There are accessibility and educational projects 
which should be celebrated, but there are also biases embedded in our very ideas of what 
an instrument, music, and art should be, and using empowering technologies to make and
maintain oppressive power structures only perpetuates a status quo: the maker movement
is primarily a cash cow for O’Reilly Media, a recruitment talking point for tech universities, 
and jobs for people who sell the parts and tools or run the communal workshops. Computer 
music and electroacoustic music, both as academic disciplines and communities, remain 
heavily gendered, classed and raced domains (Born and Wilkie 2015). 

What is to be done? My response is to think of how to address those conceptions 
of instrument making that maintain oppressive structures. Thinking widely about music 
history, I am interested in musical practices which rely on local resources, which include 
materials, expertise, and the universal tendency to make something of one’s free time. 

What would local resources and expertise look like for electronic music, where the 
hegemony of western culture is already an uncomfortable given? The “locality” of electronic- 
sound-making components available to some is an arbitrary and blurry concept - just look at 
the shape-shifting output of Hannah Perner-Wilson’s Traces With Origin project. 17 “Local 
electronics” might be interpreted in our modern reality as taking an autobiographical / auto- 
ethnographic aspect. As a voluntary, self-inflicted label, it could mean whatever the maker 
thought was fair / accessible to them, materially and intellectually. For me, it meant trying to 

17. A set of speculative design projects by Hannah Perner-Wilson with various collaborators, imagining “A
parallel universe that separated from ours roughly 200 years ago, around 1800 when electricity was being
discovered and the first electronic devices were being invented. In this universe communities made a point 
of developing local electronics that drew upon on regional resources and production methods. The design 
and function of these electronics were strongly influenced by cultural tradition and ritual. As cultural goods, 
these electronic devices have quaintness to their functionality, a property not existing our own universe.” 
(Perner-Wilson 2015) 




not order anything online, and only buy new materials or devices accessible within walking 
distance and for less than a few dollars. From these primary materials - nails, potatoes,
musical postcards or electric candles - the question then became: there is an optimal, musical
arrangement here, what is it? 

Before I had built anything, this performative conception of a design process had, to 
me, a clear corollary: there was no way I could ever build any musical electronics from 
“whatever I had” without making something basically about to fall apart, and then actually
falling apart most of the time. However, that prospect was exciting: I felt musical arrangements of
these components had, foreshadowing my discovery of Tudor’s work, “something to say,” and
they had to say it quickly before that decaying process destroyed it. In fact, that would be
the only way to pull this trick off: if I wanted to make electronic music from almost nothing, 
I needed to let it do whatever that day’s assemblage wanted to do. Because of its unstable 
nature, it had to be about its internal potentials. More precisely, the right arrangement 
was the one that communicated to a listener the most about the internal nature not just 
of the components, but also of all the interesting musical interactions between the compo¬ 
nents. Performatively, this had an equally arbitrary-but-seemingly-inevitable consequence: 
I needed to build and destroy the instrument in front of the audience. What better way to 
illustrate, through sound, the potential and message not just of each component, but of each
combination of components?

Three years before Haddad, Xiao, Machover and Paradiso’s article of the same name,
I dubbed these bricolages “fragile instruments”, an homage to Japanese architect Sou
Fujimoto’s conception of a fragile architecture, understood as “the balancing of disparate elements to
make an order that incorporates uncertainty” (Nishizawa 2010; Haddad et al. 2017). 18 I
set out to build various experiments to work towards my goal of fully fragile instruments: 
electronic music devices that would be built entirely out of what I conceived to be “local” 
and “accessible.” I would then document the results. Due to my personal reading of “local” 
in this context, the instructions would describe something closer to the “concept” of an
instrument - which physical phenomena the piece relied on, valid anywhere in this reality -
rather than a recipe requiring the same ingredients. Others could then transfer that concept 
and make new iterations of the work based on the underlying ideas and physical phenomena. 

18. For this reference I must thank Shannon Werle, my classmate in my time at Dartmouth College’s 
Digital Musics master’s program. 




Early artistic implementations of this work include some unpublished pieces such as Circuit 
Hammering or Celldrone # and published works such as Pop Rock (Teboul 2020c). 

In this process, there was and remains a major roadblock: method. There is no
systematic precedent for technical analysis of electronic artifacts in musical and historical contexts.
This realization came in the process of a side project for this concept, which involved
understanding the cultural and political baggage of each “local” electrical thing I used. This
is an equally important aspect of their influence, of their weight in the overall system. To 
properly document a piece, it was essential I review the cultural and technical meaning 
of each component. It also seemed important I document in a corresponding fashion any 
meaningful combination of components, of any computer code involved, and of any interface 
used to control these systems. Understanding the cultural implications of technical decisions 
would become the sole focus of my master’s thesis. I set out to understand the implications
of attempting to fix this lack. It was, and to this day mostly remains, a methodological
question. Furthermore, if there is an acoustemological aspect to the pieces, which can be
addressed through listening, and an epistemological aspect to the hardware and software 
which can be documented through technical analysis (Rheinberger 1994; Knorr-Cetina 1999; 
Magnusson 2009; Feld 2015), the process of making an electronic music instrument remains 
overall a lossy encoding process: information about the work is lost in the artefact of the 
work. This compressive phenomenon is aggravated by aging, which is constant and starts
immediately. My conclusion was that in addition to the necessary development of such a
technical analysis method for devices in modern music, interviews are also essential to an
understanding of individual instrument building projects. Any technical analysis and historical
work had to imply ethnographic work, and rely on people’s memory of the making process 
if the latter had already passed. 

The development of a synthesizer which prompts users to think of its own history
and the labor involved in its making is the basis for my participation in Professor James
Malazita’s “Tactical Humanities Laboratory” at Rensselaer Polytechnic Institute, where I 
collaborated with undergraduate students Caoilin Ramsay and Emily Yang to develop what 
we called an “arbitrary waveform generator”. A summary for my work there, which elaborates 
on this concept of making as a lossy encoding process, is included in a collaborative piece 
co-authored with Malazita and Science and Technology Studies doctoral candidate Hined Rafeh
(Malazita, Teboul, and Rafeh 2020). 




Just as this line of inquiry could be seen as the groundwork for future musical projects, 
I also see it as the basis for future research endeavors. The case-studies I originally meant 
to implement, such as George Lewis’ Voyager or the work composed on Pauline Oliveros’ 
diy Buchla modular system, were meant to be the actual object of focus. Nevertheless, the 
case-studies detailed in the next chapter, although they began as proofs of concept to confirm
I was ready to deal with works of Oliveros’ and Lewis’ magnitude, revealed themselves as
complex enough to constitute the analytical portion of this dissertation. Further reflection 
on this is detailed in chapters six (results) and seven (conclusion). 



CHAPTER 5 
Case Studies 


The case studies presented here are applications of the methodology outlined in chapter 
three of this dissertation, that is, a technosocial form of reverse engineering. Although 
drafts of the method were written before these studies were undertaken, the method and 
the studies informed each other extensively. Just like the systems and pieces discussed were 
co-constructed, the method and the studies fed into each other. My theoretical approach 
was expanded by my practical experiments, which were themselves informed by said theory. 

Each of these subsections illustrates the importance of the artifacts and their potential
as objects of study. They show how technical media can complement the traditional scores, texts
and interviews more common in music studies. They also make clear how re-iterating the 
system might inform an understanding of the piece and the work. In this regard I oscillate 
between a third and first person. My focus shifts as needed between an interpretation of 
existing knowledge and a discussion of the insight provided by this process of “covers” which 
I undertook to test my understanding of such systems, and to make sure the information 
provided here could constitute a reasonable start for future covers and adaptations. 

With the analyses of pre-existing works as well as discussions of my own work, I 
present connections between the authors’ documented intentions (explicit when available,
deduced when not) and their realization as technical designs and artifacts. The nature of the
connection between each system and each piece is unique: it is the common mechanisms of 
making handmade electronic music works, of which systems and pieces are the by-products,
that this method highlights.

5.1 Steve Reich’s 1969 Pulse Music and the “phase shifting pulse 
gate” 

The phase shifting pulse gate is a system built between 1968 and 1969 by U.S. composer
Steve Reich (born 1936), assembled with “a great deal of help” from Larry Owens and Owen
Flooke, two engineers from Bell Laboratories (Reich and Hillier 2004, 41). It was used in
performance for two pieces: Pulse
Music and Four Logs. It was effectively abandoned after its second use in performance at 
the Whitney Museum, which featured both pieces. Following the nomenclature outlined 
previously, we can clearly identify the system, the phase shifting pulse gate, and the corre¬ 
sponding pieces Pulse Music and Four Logs. Reich may have considered the phase shifting 
pulse gate a stillborn project after the relative failure of the Whitney Museum performance 
on May 27th, 1969 (41), naming his article on the process “An End to Electronics.” (Reich 
1972) Because he brackets this section of his career so clearly with this article, I will refer 
to the phase shifting pulse gate as the system, focusing on Pulse Music as a corresponding 
piece since Four Logs was not composed specifically for the system. Although quite abstract, 
the work can be thought of as the narrative arc from design to abandonment of the system 
and its corresponding piece. 

5.1.1 Materials available 

The 1972 article cited above was included in Reich’s 2004 collection of writings (Reich 
and Hillier 2004). This serves as the main basis for this case study. There are no recordings 
of the performances or of the system, and the only picture of the device available in the
article is not detailed enough to gather information about the circuit in the device.

The article does contain a block diagram of the system (adapted in figure 5.1), as 
well as an almost-traditional staff-notated score and a fairly detailed, if concise, discussion 
of performance and design details. Few materials exist beyond that, apart from a clear 
description of the system by music critic Jeremy Grimshaw: 

With the help of Larry Owens, an electronic engineer at Bell Laboratories, Reich 
designed a machine for realizing incredibly complex rhythmic phase relationships. 

The Phase Shifting Pulse Gate, as they called the device, made no sound of its 
own, but controlled the passage of sounds fed through it to an amplifier. Reich 
specifically composed Pulse Music to utilize the Pulse Gate’s possibilities; this 
piece, as well as Four Logs (also using the Pulse Gate), was introduced in a 
concert in 1969 at the New School in New York. With its application in Pulse 
Music, the Phase Shifting Pulse Gate became a kind of automatic electronic 
organ. The piece utilizes eight pitches in a minor mode that are generated by 
eight different wave oscillators and turned “on” or “off” in rhythm by the Pulse
Gate. At the opening of the piece, all eight tones are sounded together as a chord.

The Pulse Gate keeps a steady tempo, but as the piece proceeds, the person at 
the controls of the device gradually offsets the notes from each other, one by one 
and beat by beat, so that the chord is slowly laid out on a horizontal rather than 
vertical plane, the harmony turning into a melodic line. Once the chord is fully 
parsed into melody, its tempo increases until the opposite effect is achieved: the 
notes pass so quickly as to blur back into a chord. Although the effect of the work 
was undoubtedly startling, Reich soon abandoned mechanically derived phasing. 
(Grimshaw 2019) 

Historically, the phase shifting pulse gate system effectively pushed Reich away from 
the electronic works of Come Out , It’s Gonna Rain and Melodica, towards his commitment
to acoustic phasing, as explored in his Piano Phase (1967). After
his dissatisfaction with the phase shifting pulse gate performances of 1969, he focused on 
these acoustic processes. Of the few hundred people who may have heard the phase shifting
pulse gate in 1969, it is likely that the only one who actually remembers it working is
Reich. As a transitional piece, it offers helpful context with which to understand both the
perceived musical affordances of electronics in Reich’s milieu, and Reich’s perspective on 
those affordances. 

5.1.2 Analysis of the system 

Reich’s concept of phase, reified by this machine as well as a number of his other works,
is a melodic and rhythmic device that relies on synchronization / de-synchronization at
the level of the musical phrase, while in engineering parlance signal phase is a timbre-shaping
device that relies on constructive and destructive interference between individual frequencies
interacting with each other within individual oscillation cycles. Understanding this helps us
better appreciate how arbitrary some of the decisions embedded in the design of the
phase shifting pulse gate are. These directly relate to the system’s premature demise as a
compositional and performative tool. 
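The distinction between these two senses of “phase” can be sketched numerically. The rhythmic pattern and frequencies below are invented for demonstration: Reich’s phasing offsets a whole pattern in time by discrete steps, while signal phase shifts a waveform within its own cycle, producing interference.

```python
import numpy as np

# Reich's musical phasing: the whole pattern slips in time by a discrete step.
pattern = np.array([1, 0, 0, 1, 0, 1, 0, 0])
phased = np.roll(pattern, 1)  # same pattern, offset by one subdivision

# Engineering signal phase: shifting a sinusoid within its own cycle.
# A half-cycle (pi radian) shift summed with the original cancels completely.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
tone = np.sin(2 * np.pi * 5.0 * t)
shifted_tone = np.sin(2 * np.pi * 5.0 * t + np.pi)
residue = np.max(np.abs(tone + shifted_tone))  # destructive interference: ~0
```

The first operation changes rhythm but leaves timbre untouched; the second changes amplitude and timbre without any rhythmic consequence, which is why the two vocabularies talk past each other.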

In “An End to Electronics,” his original 1972 article discussing the phase
shifting pulse gate, Reich features two diagrams: one of the overall system, and another of
one of the phase shifting pulse gate’s twelve gate circuits (Reich and Hillier 2004, 39-40).
If electronic schematics show which components are included in a circuit and how they are
connected, a block diagram operates at a higher level of abstraction. Each “function” of the
circuit is represented, usually within a box, with arrows towards that box representing the 
signals necessary for the proper operation of that function, and arrows coming out showing 
the product of that function and its connection to other function-boxes. In that sense, the 
block diagram, also called functional diagram, is perhaps the closest thing to a “black-box” 
representation of technical systems, a common talking point in science and technology studies 
(Winner 1993; Hsu 2007; Von Hilgers 2011) and history of technology (Bissell 2004; Eisler 
2017). A combined diagram, adapted from Reich’s is presented in figure 5.1. 

These diagrams are detailed enough to confirm that the system implements what 
Michael Johnsen calls “bulk-time delay,” which corresponds to Steve Reich’s use of the word 
phase in writings and piece titles, rather than frequency level “signal phase” shifting, from 
engineering parlance. 19 The phase shifting pulse gate operates in a stepped manner,
dividing clock cycles into 120 subdivisions. The score included indicates that these clock cycles
correspond to measures. In other words, Reich designed and built a machine which could
arrange up to 8 pitches into arbitrary positions at any 1/120th of a measure, with a “speed”
knob to control how long a measure was. The arrangement of the pitches is done using a series of
switches, with one set of twelve controlling decades and another set of ten controlling units.
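This switch arithmetic can be sketched as follows. The sketch assumes only what the text states: 120 subdivisions per measure, set per gate by a decades switch (0–11) and a units switch (0–9); the channel settings in the example are illustrative, not taken from Reich’s score.

```python
# A hedged sketch of the stepped timing scheme described above. Gate
# settings below are invented for illustration.

def gate_step(decade, unit):
    """Combine the two switch banks into a step index, 0-119."""
    assert 0 <= decade < 12 and 0 <= unit < 10
    return decade * 10 + unit

def measure_schedule(settings):
    """Map each occupied step of the 120-step measure to the gates that open."""
    schedule = {}
    for gate, (decade, unit) in settings.items():
        schedule.setdefault(gate_step(decade, unit), []).append(gate)
    return schedule

# A chord (all eight gates on step 0) gradually offset into a melody:
chord = {g: (0, 0) for g in range(1, 9)}
melody = {g: ((g - 1) * 15 // 10, (g - 1) * 15 % 10) for g in range(1, 9)}
```

Moving gates one by one from the `chord` setting toward the `melody` setting mirrors the process Grimshaw describes, where the opening chord is slowly laid out horizontally as a melodic line.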
Useful information from Reich is included in his reflections on his two performances: 

The “perfection” of rhythmic execution of the gate (or any electronic sequencer or 
rhythmic device) was stiff and unmusical. In any music that depends on a steady 
pulse, as my music does, it is actually tiny micro-variations of that pulse created 
by human beings, playing instruments or singing, that gives life to the music. 

Last, the experience of performing by simply twisting dials instead of using my 
hands and body to actively create the music was not satisfying. All in all, I felt 
that the basic musical ideas underlying the gate were sound, but that they were 
not properly realized in an electronic device (Reich and Hillier 2004, 44). 

This suggests that the rigidity criticized by Reich in his own system was nevertheless 
his intent. Michael Johnsen, a U.S. performer, composer and circuit designer and occasional 
member of Composers Inside Electronics, comments: 

19. email exchange between Michael Johnsen and the author, 5/21/2018. 





Figure 5.1: A transcribed diagram showing the clock mechanism common to the 
entire circuit, one of the twelve gates, and the system’s inputs and 
outputs. Based on Reich and Hillier 2004, 39-40. 





















Reich’s decision to create metered divisions of the master clock [using cascaded 
flip-flops to fine-grid time to a new maximum resolution of 120th notes] is exactly 
what deprived him of the “tiny micro-variations of (...) pulse created by human 
beings” he wanted. He could have had that “human touch” with near-infinite 
resolution had he included resistive analog controls for pulse length instead of 
regimenting time into flip-flopped steps. I think it was his own attachment to the 
idea of musical meter, rather than a “natural” approach to “time” that pushed him 
toward a design he then rejected as mechanical. In other words he superimposed 
a “digital” idea about time on a medium that was inherently analog. In fact 
an analog realization could have given him the swing he wanted, and with a 
smaller footprint to boot. But then Reich probably didn’t really want infinite 
time resolution. He just wanted a little bit of swing in an otherwise safely gridded 
time-world. 20 

This prompted the development of a digital rendition of the “phase-shifting pulse gate” 
in a variant of the digital audio programming environment Pure Data (Puckette 1996), 
Purr Data (Bukvic, Graf, and Wilkes 2016, 2017). Such an environment is convenient to 
implement time-based musical systems like the phase shifting pulse gate. The score of Pulse 
Music helps one understand how Reich composed for the phase shifting pulse gate system: when 
writing for this hardware device, each note or chord is notated by Reich with a number, 
such as a “(1)” underneath it on the staffs. This number corresponds to the gate which will 
open in any given measure for that note to sound through the system. As such, the system 
is more of a timing device than a synthesizer: Reich used it in its two performances with 
sinusoidal oscillators; it deserves the title of gate. 

5.1.3 A “cover” of Pulse Music 

Based on the tempo of a piece and these gate markings, it is possible to implement 
the system using a series of delays rather than the original transistor gates. For example, in 
measure 2 of Pulse Music, the chord begins to separate into an individual pitch at the beginning 
of the cycle (gate 1) and the rest of the pitches a little bit later, at gate 11. The tempo 
is notated as a dotted half note for 112 beats per minute, so even though there is no time 

20. email conversation with the author, 5/24/2018. 




equal temp  name  note#  measure:  1   2   3   4   5   6   7   8   9   10 
440         A4    8                1   11  21  31  41  51  61  61  71  81 
392         G4    7                1   11  21  31  41  51  61  71  81  91 
329.63      E4    6                1   11  21  31  41  51  51  51  61  71 
293.66      D4    5                1   11  21  21  31  41  41  41  51  61 
261.63      C4    4                1   11  21  21  21  21  21  21  21  21 
246.94      B3    3                1   11  21  31  31  31  31  31  31  31 
220         A3    2                1   11  11  11  11  11  11  11  11  11 
164.81      E3    1                1   1   1   1   1   1   1   1   1   1 

equal temp  name  note#  measure:  11     12     13     14     15     16       17     18     19     20 
440         A4    8                81     81     81     81     101    91,81    81,91  91,71  91,71  101,71 
392         G4    7                101    101    101    101    81,91  111,101  1,101  101,1  101,1  91,1 
329.63      E4    6                71     61     61     71,61  61,71  61,71    61,71  61,81  61,81  81,61 
293.66      D4    5                61     51     51     51     51     51       51     51     51     51 
261.63      C4    4                21     21     21,31  31,21  21,31  21,31    21,31  21,31  11,31  31,11 
246.94      B3    3                41     41     41     41     41     41       41     41     41     41 
220         A3    2                11     11     11     11     11     11       11     11     21     21 
164.81      E3    1                1      1      1      1      1      1        111    (101)  111    111 

lo to hi delay values, multiply by 9 for milliseconds at original tempo markings 


Figure 5.2: A re-writing of Reich’s score for Pulse Music as a table. 


signature, it is possible to calculate that each 120th of a measure is roughly 9 milliseconds 
long. In other words, multiplying the number of the gate (between 1 and 120) for a specific 
note by 9 equals the number of milliseconds that note needs to be delayed by to sound at 
the proper location in the measure. Purr Data allows this multiplier of 9 to be dynamically 
readjusted based on the master tempo clock. 
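The arithmetic behind this multiplier can be checked in a few lines; the assumption that a measure of Pulse Music spans two dotted-half beats is mine, chosen to match the approximately 9 ms figure above:

```python
# Check the ~9 ms per 1/120th-of-a-measure figure quoted above.
# Assumption (mine): at the notated tempo (dotted half = 112 BPM),
# one measure of Pulse Music spans two dotted-half beats.

def step_ms(bpm, beats_per_measure=2.0, steps=120):
    """Milliseconds per step when a measure is divided into `steps` parts."""
    measure_s = beats_per_measure * 60.0 / bpm
    return measure_s / steps * 1000.0

# At 112 BPM this comes out to roughly 8.9 ms, i.e. the "multiply the
# gate number by 9" rule of thumb used to place each note in the measure.
ms = step_ms(112)
delay_for_gate_11 = 11 * ms  # ms of delay before gate 11 opens
```

The same function, driven by a variable tempo, gives the dynamically readjusted multiplier mentioned above.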

Based on this information, the process of transforming this into a digital system began 
by translating the score into a table detailing the pitches used and at which gate, see figure 
5.2. There are 8 rows, one for each pitch notated, and 23 columns, one for each of the score’s 
measures. Each cell shows at which gate number a pitch is assigned to in a given measure. 
Because some pitches repeat in some measures, some cells include 2 numbers, up to twelve 
simultaneous pitches if you count these repeating ones. The greyed cells correspond to values 
which do not change from one measure to the next. 

The pitches included in this piece as written by Reich in his score are notated in 
figure 5.3. In Pulse Music , the 12 individual pitches are audible when the chord is fully 
arpeggiated in measures 16 through 20. In red I’ve highlighted what I believe to be a 
mistake in Reich’s score: at measure 18, the gate number for the E3 is marked 101, when 
there is another pitch already assigned to that gate (a G4). Furthermore, that E3 is notated 
at gate 111 in both the preceding and the following measures. If the 101 marking is used, 
gate number 111 is closed altogether for that measure, breaking the pattern 
of arpeggiation built by Reich up to this point. For these reasons I believe that Reich meant 
to notate 111 in measure 18 under the E3 but mistakenly published (101) in the score used 
for this publication. 
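A mechanical check like the hypothetical one below makes such collisions easy to spot; the gate assignments used in the example are illustrative:

```python
# Hypothetical consistency check for gate assignments in one measure.
# Given a mapping from pitch name to its gate number(s), report any gate
# shared by two different pitches -- e.g. the E3/G4 collision at gate 101
# discussed above.

def shared_gates(measure):
    """measure: dict mapping pitch name -> list of gate numbers."""
    by_gate = {}
    for pitch, gates in measure.items():
        for g in gates:
            by_gate.setdefault(g, []).append(pitch)
    return {g: ps for g, ps in by_gate.items() if len(ps) > 1}

# Illustrative fragment of measure 18: the E3 at gate 101 collides with
# a G4 already assigned there, flagging the suspected typo.
m18 = {"E3": [101], "G4": [101, 1], "A4": [91, 71]}
conflicts = shared_gates(m18)
```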

Identifying the potential error in the published score illustrates how a knowledge of the 
system and the piece enables a critical understanding of works of handmade electronic music. 
The fact that pitches repeat was not obvious to me in my first encounter with Reich’s score 














Figure 5.3: The opening chord in Pulse Music (for Phase Shifting Pulse Gate). 

These pitches spread into an arpeggio over the course of the compo¬ 
sition, before returning to a chord at the end of the piece. 

for Pulse Music - therefore, rather than implementing the system with 8 gates and a 
mechanism which would allow for an arbitrary number of openings, I implemented a delay 
line system which could individually delay each gate’s open and close signal (a pulse). 
Rather than retrofitting the system for multiple pulses, I made extra gates, one for each of the 
notes played when the chord in Pulse Music is fully arpeggiated. 

The Purr Data patch consists of twelve sine oscillators which generate the eight pitches 
you hear in the piece. For each clock cycle, a pulse, filtered as described above to simulate 
a fitting switching speed, gets written to a delay line. That delay line allows that envelope 
to be delayed to the proper 120th subdivision of the measure as notated by Reich. When 
the envelope finally does get read, it gets multiplied with the output from the oscillator, 
opening the pulse gate after it’s been “phase” shifted. In Michael Johnsen’s terms, this is 
“bulk-time delay” phase rather than the “signal phase” discussed in other contexts such as 
signal processing. 
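The same delay-line gating idea can be sketched offline in Python; this is my approximation of the patch’s logic, not the Purr Data implementation itself, and the envelope shape, measure length, and durations are assumptions:

```python
import math

# Offline sketch of the delay-line gating described above: each pitch gets
# a sine oscillator, and its gate envelope is delayed to the notated
# 1/120th subdivision of the measure before multiplying the oscillator.

def render_measure(gates, measure_s=1.07, sr=44100, pulse_s=0.009, ramp_s=0.003):
    """gates: list of (frequency_hz, step_1_to_120) pairs."""
    n = int(measure_s * sr)
    out = [0.0] * n
    for freq, step in gates:
        start = int((step - 1) / 120 * n)   # delayed gate opening
        length = int(pulse_s * sr)
        ramp = max(1, int(ramp_s * sr))
        for i in range(length):
            t = start + i
            if t >= n:
                break
            # trapezoidal envelope: linear rise and fall to avoid clicks
            env = min(1.0, i / ramp, (length - 1 - i) / ramp)
            out[t] += env * math.sin(2 * math.pi * freq * t / sr)
    return out

audio = render_measure([(440.0, 11), (392.0, 1)])  # A4 at gate 11, G4 at gate 1
```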

I turned the above table of open gates per measure into a series of number messages 
in the Pure Data patch - clicking one parses all the values to the corresponding oscillators. 
I could have automated the whole piece, setting up a number of delayed bangs so that the patch 
could play Reich’s score by itself, but Reich does not specify the number of repetitions of each 























measure in the score. Therefore, it seemed best to let this variable be controlled by the 
player. 

Developing this further, I want to focus on the next imprecision in the documentation 
left by Reich for Pulse Music: the shape of the volume envelope for each gate. One thing that 
becomes clear when utilizing a digital environment to program an analog of the original device 
is that the opening and closing time for each of the gates becomes an important element 
of the system. A very quick opening and closing will result in clicks at the beginning and 
end of each note. At the speeds and density notated for this piece, the resulting timbres of 
these overlapping clicks can have a dramatic impact on the overall rendition. Conversely, 
a smooth rise and fall in amplitude at each gate open-close cycle results in smooth, almost 
marimba-like timbres, especially when combined with sinusoidal oscillators. The switching 
speed of each gate brings to attention the original materiality of the system: that speed is, 
at its faster end, directly dependent upon the characteristics of the transistors used for the 
circuit by Reich, probably based on Hooke and Owens’ recommendation and/or what was 
available to him locally. At the slower end, it is possible that Hooke and Owens recommended 
the addition of an RC lowpass filter to soften any clicks, had they been present. Any such 
filter could have had some influence on the timbre of the overall device. 
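A first-order digital filter can stand in for such a hypothetical RC lowpass; the cutoff and sample rate below are illustrative assumptions, not recovered values:

```python
import math

# Digital stand-in for the hypothetical RC lowpass mentioned above: a
# one-pole filter smoothing a gate pulse so its hard edges no longer click.

def one_pole_lowpass(signal, cutoff_hz, sr=44100):
    a = math.exp(-2 * math.pi * cutoff_hz / sr)  # pole from RC time constant
    out, y = [], 0.0
    for x in signal:
        y = (1 - a) * x + a * y  # y[n] = (1-a)*x[n] + a*y[n-1]
        out.append(y)
    return out

pulse = [0.0] * 100 + [1.0] * 400 + [0.0] * 100      # raw gate: hard edges
smoothed = one_pole_lowpass(pulse, cutoff_hz=300.0)  # softened rise and fall
```

Raising or lowering the cutoff trades click energy against envelope sluggishness, which is exactly the timbral trade-off discussed above.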

At the time of writing, I was not able to ask Reich, Hooke or Owens what transistor 
they used or recommended, or whether discontinuities caused by a fast-switching transistor 
required additional circuitry. Neither have I found a convincing list of transistors likely to 
be available in New York City electronics stores in 1968-1969. This points to some potential 
future work in history and economic history of technology. 

5.1.4 Interface 

As a response to this missing information, when implementing my digital phase shifting 
pulse gate, I have parametrized the unknown characteristics of the system. For example, a 
knob controls the rise and fall times of each gate. I can perform Pulse Music with any setting 
between an instantaneous jump and the maximum rise/fall times possible (these depend on 
the tempo at which the piece is played). I can also control this parameter separately for each 
gate. This gives more control to the user to affect this parameter as a compositional decision, 
rather than a materially derived fact. In other words, re-iterating the phase shifting pulse 
gate based on the high-level block diagram rather than a low level circuit schematic shifts 





the agency of the composer and performer to a different compromise with the materials at 
hand. In my particular reiteration it does not shift it all the way into the hands of the 
composer, because the parameter does not allow for any arbitrary rise and fall pattern, but 
it does make it an explicitly controllable parameter which can be modified in real time by 
the performer. 

Another realization in the process of building the interface related to the possibility of 
adjusting each of the gate values oscillator by oscillator, or devising a way that could update 
all of the oscillators at the same time. Reich’s article does not give much information about 
this. The mechanism which allows, for example, a G4 to be played at both gate value 81 
and gate value 91 in measure 15 of a performance of Pulse Music is unclear. The picture 
of hardware included in Reich’s article seems to show a few knobs per gate, leading to the 
possibility that each gate had its own controls. It is likely that each gate had to be opened 
and its value updated one by one, but this is speculative. Nevertheless, one can imagine the 
drastically different experiences resulting from all gates being changed at the same time or 
having a human operator update them one at the time. 

This digital reimplementation allows for both types of experiences to be implemented 
for testing. Each gate is fed the corresponding number value from the score, allowing each 
gate to be updated independently if desired. Alternatively, lists of 12 values can be processed 
to update all 12 gates simultaneously. This technical choice and its audible musical 
consequences became clear only after experimenting with the system. 

Other parameters were also left undiscussed in Reich’s original article. Intonation is 
not specified in the score or the chapter describing the piece. In my system, each pitch is 
tuned to A440 equal temperament frequency values (listed in the diagram above). However, 
there are also controls allowing any other temperament or pitch combination to easily be 
inputted. New arpeggiation patterns for the chord can also be easily written by making new 
sets of delay values. In the Pure Data environment, such lists of numbers are contained in 
what is called a message box: clicking any message box in the patch connected to the 
delay value parsing system will automatically separate and distribute the twelve values in 
that list to the proper gate. There are two more controls which relate to my implementation 
rather than to Reich’s original design: the width of the gate pulses, which can be set to 
longer intervals than l/120th of a measure, and the phase of the pulse, which need to be 
synchronized to perform Pulse Music as notated. These are byproducts of the mechanism I 





chose to implement my digital phase shifting pulse gate, once again reflecting the different 
musical affordances and constraints which accompany each technical design. 

The only continuous control explicitly included in Reich’s original device is the clock. 
This is a control setting the length of a measure, and therefore of every other musical event 
in Pulse Music or produced by the phase shifting pulse gate. In my system the clock is also 
included as a continuous global parameter which affects every other time-related event in 
the system. 

The Pure Data interface for this implementation is quite dense, but all controls fit on 
a laptop screen for performance. This recreation of the system was performed according to 
Reich’s original score (the third time the piece had ever been performed, and the first in 
roughly 50 years) at the Arete gallery in Greenpoint, NYC in March 2019, on invitation 
from Nicholas DeMaison’s Wavefield Ensemble. 

5.1.5 Conclusion 

Although this is conjecture pending discussion with Reich, the truly unique design 
(especially for 1968) of the system makes it likely that it is identical or close to Reich’s 
initial plans for the device. His working with skilled engineers makes it likely that there 
was little modification of the system. Reich would have been unlikely to spend over a year 
designing this fairly unique gating mechanism and enrolling two circuit design expert in the 
process if it had not appeared as necessary to him. The phase shifting pulse gate probably 
does exactly what Reich wanted it to: that is also probably why it failed to meet Reich’s 
hopes. Therein lies the paradox of handmade electronic music, where the experimentalist 
may spend an eternity perfecting an instrument by trying only exactly what they want to 
try. The disappointment he felt with the results of combining that piece with this system 
was likely due, paraphrasing Johnsen, to Reich’s attachment to a gridded concept of musical 
meter. Clarifying this, at least for myself, required not just looking at the diagrams, but 
also attempting to build some version of the system described by the diagrams. In doing so 
one can not only hear Pulse Music for the first time in over fifty years, but also have an 
as-informed-as-possible understanding of the musical structures (rhythms, harmonies, timbres) 
it might have contained, as hinted by Reich’s article. 

To generalize, the handmade electronic music process tends to be, in signal compression 
terms, a lossy encoding: it is not possible to recover all intentions and contexts from 





technomusicological artifacts. Imprecisions such as the lack of information regarding the 
switching time of transistors in 1968, or the hypothetical presence of an additional filter, are 
generative: rather than a canonical rendition of the piece, impossible because no recording 
was made in 1969 that could serve as a template, we now have a range of potentially accept¬ 
able performances that correspond to all the permutations of the state of the system over 
the course of the piece. Maybe Reich wasn’t able to use 12 matched transistors, and they 
each sounded slightly differently clicky from the others? Maybe they did implement filters, but 
tuned them all differently? Maybe he had a system to change the open and closed gates for 
all 12 oscillators at once, or maybe he changed the time placement of the open gates oscillator 
by oscillator. With each question a set of compositional and performance possibilities 
arise. Answering these directly benefits from experimentation and knowledge of the context: 
based on Reich’s lifelong oeuvre and the tone of the documentation he wrote about 
it, it seems likely that he would have preferred the smooth transitions of a slow opening and 
closing speed. At the notated tempo, this is roughly a third of the gate duration, or three 
milliseconds. For comparison, consider that the 2N3904 small signal transistor, which was 
developed in the early sixties by Motorola, has switching on and off speeds in the nanosecond 
ranges. In the case of the phase shifting pulse gate, the switching speed limit - the enemy 
of fast digital computing systems - may have in fact served to emphasize and make more 
explicit Reich’s aesthetic leanings. 

This case study illustrates how a simple system diagram and a relatively short set 
of contextual information can go a long way in re-creating an effectively forgotten work 
of handmade electronic music, even without additional input from the original author, or 
recordings of the original piece. This process of re-invention illustrates the importance of 
individual components such as the gate transistors and their hypothetical switching time. 
Digital modelling, to some extent, allows these materially-derived decisions of hardware to 
be taken explicitly, shifting agency from circumstance to intention. 

Had there not been a block diagram, a short article and a score, this project would 
have been significantly more speculative. Not impossible - but the information in Reich’s 
article is enough to confidently say that my replica is a relatively close approximation of the 
system. Take any of those three elements - the text, the score, or the diagrams - away, and 
that would not be the case. 

The missing information, in turn, proved to leave enough open doors to justify the 





development of a new piece based on the same digital re-invention of the phase shifting pulse 
gate. For more information about the latter please refer to the next section. 

5.2 Pulse Music Variation 

Having developed a functional model of Reich’s “Phase Shifting Pulse Gate” I realized 
there were a number of variations which could be attempted to address Reich’s self-criticisms 
of rigidity. Continuing work within the Purr Data visual audio programming environment, I 
implemented a mechanism which enables individual notes to glide from one gate to another, 
rather than jumping instantly. This is done using a “line” object and feeding it a pair of 
control values containing the destination value and the time over which to generate the fade 
from the starting value to that destination value. In Purr Data, objects are subroutines that 
are called via their name: the “line” object ramps from its current value to the destination 
value over the time indicated (in milliseconds). A glide time of 0 causes the values to jump 
instantly; this is the setting with which I 
perform a “regular” “cover” of Pulse Music. A value of a few seconds, at the speed prescribed 
by Reich, allows the listener to hear each pattern transform into the other one, rather than 
jumping to it. 
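The glide mechanism can be approximated outside Purr Data as a simple linear interpolation sampled at a control rate; this is my sketch of the idea, not the patch itself, and the control rate is an assumption:

```python
# My approximation of the [line]-style glide used in the Variation:
# interpolate a gate's delay value from its current setting to a new
# target over a given time, sampled at control rate.

def glide(start, target, glide_ms, control_rate_hz=100):
    """Yield intermediate delay values from start to target over glide_ms."""
    steps = max(1, int(glide_ms / 1000 * control_rate_hz))
    for i in range(1, steps + 1):
        yield start + (target - start) * i / steps

# Gate moving from step 11 to step 21 over 2 seconds: the note's position
# in the measure drifts audibly rather than jumping.
trajectory = list(glide(11.0, 21.0, 2000))
```

A glide time of 0 collapses to a single step, reproducing the instantaneous jump of the “regular” rendition.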

This more gradual process produces moments of relative chaos, where each pitch is 
gliding within each pattern and briefly overlapping with other pitches. None of this happens 
accidentally - the result is purely predetermined by my variation on the system and Reich’s 
original score - however this process was certainly not anticipated in Reich’s discussion of 
Pulse Music. A playthrough of Pulse Music Variation is available (and is linked to in 
appendix E). 

This is only the first Variation planned. Following the suggestion of Michael Johnsen, 
as discussed in the section on Pulse Music above, I also intend to make a version which 
controls the pulse length with better time-resolution than 120ths of the measure, first in 
software and then in hardware. 

Another relatively straightforward variation would be to split the phase cycle of each 
of the 8 sinusoidal components into 120 frequency-dependent segments and iterate the piece 
with the gate values notated by Reich, both as instantaneous jumps and smooth ramps 
between values, as discussed briefly in the preceding paragraph. Although this would be 
straightforward to implement in software, I also hope to build an analog electronics version 





of such a phase-shifting circuit. 

5.3 Paul DeMarinis’ 1973 Pygmy Gamelan 

Paul DeMarinis is one of the members of David Tudor’s “Composers Inside Electronics” 
ensemble, joining the group after the 1973 premiere of Rainforest IV in Chocorua, New 
Hampshire. Pygmy Gamelan, also from 1973, is a car-radio-shaped device with 
five lights on its front panel, which generated five-note patterns when connected to a 
set of loudspeakers. This system was used in a number of pieces: here I’ll focus mostly on the 
eponymous installation as well as a composition recorded with it, Forest Booties (1979). The 
work can be thought of here as an arc of investigation into, and use of, small pattern generation 
systems as installations. As we’ll see, this case study reveals that the Pygmy Gamelan as it 
is publicly known represents only a fraction of the work done by DeMarinis on this project. 

5.3.1 Materials available 

Paul DeMarinis’ work was recently assessed in a retrospective book of his career which 
included a chapter on the 1973 work Pygmy Gamelan (Beirer 2011). There, Fred Turner, 
professor of Communication at Stanford University, reprints DeMarinis’ description of his 
own work, originally included in the liner notes of the Lovely Little Records compilation 
issued in 1981 (Lovely Music 1981): 

The Pygmy Gamelan is an installation piece... which responds to fluctuating 
electrical fields (generated by people moving around, radio transmissions, the 
births of distant stars and galaxies) by changing the patterns of five-note melodies 
it plays. The integrated circuits used in The Pygmy Gamelan are all inexpensive 
surplus items originally intended for consumer products. Their use here, however, 
is purely folkloric and tends, like the hand-carved plastic artifacts which must 
circulate in primitive societies, to refer to a culture other than that of high 
technology. (Turner 2010, 23) 

Using the terminology previously established, the Pygmy Gamelan device is an instance 
where the system and the piece overlap significantly. Quoting DeMarinis again: 

That was a piece in which that is the score—that is, the instrument, that is that 
object that does that thing. I was somewhat of a zealot about that idea, of not 




wanting to make instruments, not wanting to make general-purpose instruments. 

I thought of myself as thinking much more in the culture of art, making objects 
that were pieces, sometimes requiring performances, sometimes not, sometimes 
standing alone. (Ouzounian 2010, 12) 

Another interview provides some welcome additional context to the frame of mind that 
accompanied the original intentions behind the work: 

Renny Pritikin: I met you in ’74, so Pygmy Gamelan dated to just before then. 

Do you include it because it was an interactive sculptural installation? 

Paul DeMarinis: Well, it was before those categories really existed. I had been 
building electronics for a couple of years, and everyone around me had the idea of 
building synthesizers, instruments to use to do something else—a performance. 

I wasn’t interested in that even though I was doing performances then, too. I 
wanted to make pieces that stood on their own as artworks. The problem was 
what to do with Pygmy Gamelan, how to place it, and in thrashing around with 
this problem, I encountered the three terms you defined in your astute question. 

But my first idea was to mass-produce and market it as a replacement for the 
car radio; like Max Neuhaus, I have always had some inclination toward placing 
my pieces in quotidian, even useful, situations. I met a Detroit auto designer and 
showed him the Pygmy Gamelan and explained how it would play in response to 
driving through the varying electromagnetic fields of urban landscapes. Nothing 
came of it, needless to say. 

DeMarinis shared some schematics for this speculative and, unfortunately, never mass-produced 
car radio on his website, along with some other technical diagrams and code for 
other pieces described in Beirer’s Buried In Noise retrospective publication mentioned above. 
This electrical schematic for Pygmy Gamelan is included below in figure 5.4. DeMarinis 
originally published this schematic, along with a brief discussion, in a journal of musical 
scores called Asterisk (DeMarinis 1975). 

The block diagram used to discuss the phase shifting pulse gate in the preceding section 
is a specific type of technical abstraction of the system it describes. It schematizes the core 
functions included in the circuit, and shows how they are connected to offer the desired 






Figure 5.4: Paul DeMarinis’ schematic for the Pygmy Gamelan device (1973). 

Credits: Paul DeMarinis. Used with permission. 

behaviors. A circuit schematic, on the other hand, is an abstraction of the electrical compo¬ 
nents used to build the system. It offers a graphical representation of these components, as 
well as their interconnections, and general details about their functionality since electronics 
are, like music in some ways, a time-based medium. 

In electrical schematics, each symbol corresponds to a specific component rather than 
a specific function. Reading a schematic, then, consists of identifying these components and 
how they connect. 21 

Also made available by DeMarinis on his website is a recording of the device, left 
to operate alongside field recordings of a forest on Forest Booties (DeMarinis 2016b). To 
summarize, whereas to discuss Pulse Music we only had a score, Reich’s article, and a block 

21. For additional details on how to read schematics, see, for example, section 4.4 “Reading Schematics” 
from Catsoulis 2005, Herrington 1986, or Johnson 1994. 





































































diagram of the phase shifting pulse gate, for Pygmy Gamelan we have a circuit diagram, 
various notes by Turner and DeMarinis, and a recording of the device. 

5.3.2 Analysis of the system 

The system uses relatively random sources to generate digital patterns which are then 
turned into 5-note melodies. This functionality is provided by bit shift registers, a type of 
device built into an integrated circuit. For the phase shifting pulse gate, Reich used transistor 
gates to open and close signal paths to the loudspeakers, resulting in the chord-to-arpeggios 
pattern typical of Pulse Music. An integrated circuit is a medium which contains numerous 
transistors in a single component, combining transistors’ basic abilities to switch and amplify 
small signals in order to perform more complex tasks (Dummer 2013, 11). 

A shift register is an integrated circuit with an input and a number of outputs. As bits 
(zeros and ones, the basic signals of digital communications) are fed to the input, they travel 
down a stack of memory slots (each of the register’s outputs). A clock controls the rate at 
which they go from one slot to the one below it. When a bit reaches the bottom-most slot 
of the register, it gets discarded at the next clock tick. 
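The clocked-shift behavior just described can be modeled in a few lines; this is a functional sketch, not a model of any specific integrated circuit, and the random bits stand in for the antenna noise:

```python
import random

# Minimal model of the 8-bit shift register behavior described above:
# each clock tick pushes the input bit into the first slot, shifts every
# stored bit down one slot, and discards the bit leaving the last slot.

class ShiftRegister:
    def __init__(self, length=8):
        self.bits = [0] * length

    def clock(self, bit_in):
        """Shift in one bit; return the bit discarded off the end."""
        self.bits.insert(0, bit_in & 1)
        return self.bits.pop()

# Noise-driven pattern, standing in for the Pygmy Gamelan's antenna input:
reg = ShiftRegister()
random.seed(0)
for _ in range(16):
    reg.clock(random.getrandbits(1))
pattern = reg.bits  # taps on these slots would trigger the tuned filters
```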

Here, DeMarinis uses 5 of the 16 outputs of two eight-bit shift registers, with the noise 
generated by radio antennas as the input. With this, DeMarinis creates a system capable of 
independently generating patterns with some form of memory, which then get turned into 
sound. He prefigures Nicolas Collins’ approach of misusing early integrated circuits meant 
for digital computing as sound generators. However, rather than producing pitches via a type 
of oscillator, (the basis for much of the circuits in Handmade Electronic Music is a square 
wave oscillator known as the Schmitt trigger) DeMarinis here passes the signals produced by 
the registers to tuned bandpass filters. Consider the signal produced by the shift registers: 
as a “1” goes to a “0” or vice versa, a discontinuity in voltage is produced. This discontinuity, 
if it were plugged into a speaker, would be heard as a click. To give this click a pitch, it is 
fed through an electronic audio filter: the resonant frequency of the filter gives the click a 
timbral characteristic which is determined by the values of the components in the filter. 
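A generic two-pole resonator illustrates this click-to-pitch behavior; it is a stand-in, not DeMarinis’ bridged-T variant, and the frequency and Q values below are assumptions:

```python
import math

# A generic two-pole resonator: an impulse ("click") fed through it rings
# at the resonant frequency, giving the click a pitch, as described above.

def ring_click(f0_hz, q, n=2000, sr=44100):
    r = math.exp(-math.pi * f0_hz / (q * sr))         # pole radius from Q
    a1 = -2 * r * math.cos(2 * math.pi * f0_hz / sr)  # feedback coefficients
    a2 = r * r
    out, y1, y2 = [], 0.0, 0.0
    for i in range(n):
        x = 1.0 if i == 0 else 0.0                    # single click at t=0
        y = x - a1 * y1 - a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

tone = ring_click(f0_hz=440.0, q=30.0)  # decaying 440 Hz ping
```

Higher Q narrows the filter and lengthens the ring, which is the resonance dimension discussed throughout this section.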

DeMarinis’ schematic does not include component values for the bandpass filters. Therefore, 
it is impossible to determine the pitch and resonance of these notes using only the 
schematic. Nevertheless, this schematic offers a significant amount of detail concerning the 
mechanics of the work and how it functioned. It is worth noting that the resonant filters in 




the original circuit are quite unique - a variant of the bridged-T filter described by Ralph 
W. Burhans in his 1973 paper “Simple Bandpass Filters” with an extra capacitor (Burhans 
1973). 22 By coincidence, they are also the filter type described in an advertisement for 
the “and graphics” graphic design company, included on page 6 of the November/December 
1976 (Volume 1, Number 4) issue of Synapse Magazine (Schill and Lynner 1976), but with 
no match in the artistic or technical literature over-viewed in this project. This is notable 
in the sense that a thorough review of standard circuits did not yield exact matches, and 
therefore, DeMarinis’ circuit may actually be an original third order active filter with unique 
timbre and resonance characteristics - which, as Snazelle noted, is quite a rare occurrence 
(Teboul 2015, 151). Unfortunately, the exact process that led to this filter design has been 
lost, considering the length of time that has elapsed since its making. This illustrates the 
advantages of documentation being accumulated contemporaneously, in cooperation with 
innovative artists. 

Nevertheless, there are a number of strategies one can use to remedy such a lack. 
Examining the circuits themselves, or pictures of the circuits, would be the preferred strategy, 
since according to DeMarinis it is likely that some of these devices remain in good 
condition; unfortunately, this has not been possible in the time frame of this project. The 
next best solution would be to check any schematics other than the one shared by DeMarinis 
and copied above - this would allow for a comparative approach to be implemented in more 
detail, even if there is no guarantee that the markings on a schematic correspond to the 
reality of the built objects. However, again, obtaining such additional documentation has 
also been impossible up to this point. With the information at hand, the remaining solution 
is to write out a circuit analysis of the filter section, obtain a transfer function, and deduce 
the missing component values from the q and center frequencies audible in Forest Booties. 
This is done using the resonant frequency and q determined from the recording to fill in 
the missing values. The next two sections describe how the development of digital models 
for the circuit, along with an analysis of the Forest Booties recording, informed a deeper 
understanding of the piece. 

22. This design was mentioned in an email communication with the author, 6/4/2019. 




5.3.3 Covering Pygmy Gamelan 

To test my understanding of the system, I developed two digital models. The first one 
is interactive, meant for real time listening and experimentation. It is coded in the visual 
language Purr Data. Purr Data, like the Pure Data software that it is based on, allows users 
to write “externals,” small programs which extend the functionality of the core software. The 
“mrpeach” library of externals included in Purr Data has an external, coded in the computer 
programming language C, replicating the functionality of an 8 bit shift-register, a CD4094 
integrated circuit (mrpeach 2020). Unlike Pulse Music, which was never recorded, the Pygmy 
Gamelan is documented by a 10 minute piece from 1978 called “Forest Booties,” which Paul 
DeMarinis shared on his website. Having both a schematic and a recording makes the replication 
process straightforward to implement and to verify. 

With the mechanics of the shift registers taken care of by this external object, I 
developed an alternative way to feed data to the registers: where the original hardware uses 
the noise picked up by an antenna, I used the sound captured by my laptop’s internal 
microphone. I then connected the output of the shift registers to the filters, and from there 
to a digital-to-analog converter which sends the resulting pitched noise to the speakers. 

But how can one fill in the gaps for the missing filter values? Familiarity with comparable 
filter designs indicates that most of the behavior of a bandpass filter can be understood 
from two values: the resonant (center) frequency of the filter, in hertz, and the resonance of the filter, 
named q. This q value stands for “quality factor” and is a unit-less measurement inversely proportional 
to the width of the band of frequencies passed by the bandpass filter: the higher the q, the narrower and more resonant the band. 

A Fourier transform is a mathematical tool which allows time domain information, 
such as the amplitude values of this recording, to be transformed to frequency domain 
information. Practically, this means a graphic representation of the frequencies most common 
in a digital recording can be computed. In Forest Booties the sound of the Pygmy Gamelan 
is complemented by field recordings of a forest, consisting mostly of complex, non-pitched 
or noisy timbres. The five pitches produced by the machine are the most common pitched 
content in the recording. Therefore, the five most distinctive peaks in a frequency domain 
representation of the recording should indicate the center frequency of each bandpass filter 
included in the circuit. 
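As a sketch of this measurement, the peak-picking step might look like the following Python fragment. This is not the code used for the dissertation: the prominence-based selection, the Hann window, and the 100–2000 Hz band limits are my own implementation choices, and reading the audio from a WAV file of the recording is left to the reader.

```python
import numpy as np
from scipy.signal import find_peaks

def dominant_peaks(samples, rate, n=5, lo=100.0, hi=2000.0):
    """Return the n most prominent spectral peaks (in Hz) between lo and hi."""
    # Window the recording and take the magnitude of its Fourier transform.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    # Discard energy outside the band of interest.
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    # Keep the n peaks that stand out most from their surroundings.
    peaks, props = find_peaks(spectrum, prominence=0.0)
    best = peaks[np.argsort(props["prominences"])[-n:]]
    return np.sort(freqs[best])
```

Applied to a mono array of the Forest Booties samples, a routine of this kind returns the five filter center frequencies discussed below.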

Fourier analysis indicates that the bandpass filters were tuned to 406 Hz, 833 Hz, 946 
Hz, 1166 Hz, and 1465 Hz. These are not equal temperament standard pitches. I initially 




assumed that this was a reflection of the components available to DeMarinis, but he rectified 
this interpretation, stating: 

Although I did not make my own components to specification, I had access to 
many dozens of resistor and capacitor values from the rich troves of Mike Quinn 
Electronics. I chose the values very carefully, by ear, to achieve the tunings I 
wanted. It was an aesthetic exercise. I often de-soldered combinations of R’s and 
C’s that didn’t sound well together with the other tones. 23 

It was later pointed out to me that a pentatonic scale called slendro is common in 
traditional Javanese music as played on “regular” Gamelan. 24 As the tuning of these scales 
is not rigidly defined and is materially dependent (on the other instruments in the ensemble), 
it seemed plausible that DeMarinis had perhaps implemented his own version of “octave 
stretching” (Li 2006). Slendro scales are sometimes discussed, because of their relative and 
variable tunings, in terms of cents. Working from low to high, the cent values for the 4 
intervals audible in the Forest Booties recording are: 

• from 406 Hz to 833 Hz: 1242.7 cents 

• from 833 Hz to 946 Hz: 220.7 cents 

• from 946 Hz to 1166 Hz: 358.4 cents 

• from 1166 Hz to 1465 Hz: 394.6 cents 

Unfortunately, these values do not correspond to segments or inversions of segments of 
the slendro scale as discussed by Li, even considering its wide local variations (2). 
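The interval arithmetic behind these figures is simply 1200 · log2(f2/f1). A minimal check (note that recomputing from the rounded frequency values above gives figures within a couple of cents of those quoted, which were presumably computed from more precise measurements):

```python
import math

def cents(f1, f2):
    """Interval from f1 up to f2, in cents (1200 cents per octave)."""
    return 1200.0 * math.log2(f2 / f1)

pitches = [406.0, 833.0, 946.0, 1166.0, 1465.0]
for low, high in zip(pitches, pitches[1:]):
    print(f"{low:.0f} Hz -> {high:.0f} Hz: {cents(low, high):.1f} cents")
```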

DeMarinis offered this final clarification regarding the tuning process of each of the 
Gamelans as electrical, rather than acoustico-mechanical devices: 

I was familiar with pelog and slendro tunings in 1973, mostly via a recording 
“music of the venerable dark cloud” by the UCLA gamelan [ensemble]. But 
to answer your underlying question in more depth rather than just providing 
circumstantial hints to support a trending argument: 

23. Email exchange with the author, 5/7/2020. 

24. This suggestion came from Dr. Nina Young, one of my dissertation committee members - thank you 
again. 


In my work sounds are more or less stand-ins, place holders. But if you hire 
somebody to hold your place in line you never know what the situation might be 
when you get back. When signs happen to have an inherent meaning it is often 
in irrelevant distinction to whatever they are assigned to denote. This is one of 
the effects that makes language-like activities such as musical ones so interesting 
to engage in and so potentially rich to experience. 

At the 1988 composer-to-composer event in Telluride I attended a memorable 
discussion between Terry Riley (a true believer in affektenlehre) and Brian Eno, 
who long and hard argued about whether scales are meaningful because they have 
an inherent intervallic power (Riley - to heal, to move mountains, to command the 
human soul) or whether any group of 7 or 8 evenly dispersed intervals would, with 
sufficient repetition and noodle-ing, per-Eno, become equally plausible musical 
scales - i.e. - a material to be shaped by the musician. Of course in the end it was 
the customary clash of ego with non-ego, replete with the familiar immiscibility 
of profound belief and savoir-faire. The irresolvability of that discussion pretty 
much sums up my experiences with “scales.” 

The reason I bring this up in relation to your question is that my process for 
tuning the Pygmy Gamelans had been by electro-ear, rather than trying to hew 
to any ready-made scale. With 5 notes, it is too easy to drop into some pentatonic 
(here i mean the black keys on the western piano) drivel. It was important that 
each note hold up its own weight of the feeling. My process, if I recall correctly, 
was to select a set of resistors and capacitors from my stock on hand (using 1% 
or 0.1% resistors, and capacitors as closely matched as I could sort them this is 
the electro- part), solder them in and then listen. If I didn’t like them I would 
change things until it felt 1) unique from all the others circuit boards and 2) 
interesting enough that I could listen to it long term (days and nights upon end). 
I did run into some dead-ends where 4 pitches would not allow a fifth to exist, 
or the values precluded the filter from ringing sufficiently (usually from poor 
component matching or drift) and there was at least one circuit board that was 
such a disaster from re-soldering that it was useless. 





So yes, I was subject to the foibles and ruts of my previous listening experiences. 

By the time I was studying music as an undergrad, I had discovered modality in 
the songs my grandmother sang while hanging up the laundry. By the time of 
the Pygmy Gamelan I had heard recordings of a lot of “world musics” and had 
studied both Carnatic (as an undergrad) and Kirana school (with Pran Nath 
at Mills) Indian singing, and had played Sheng in a Cantonese music ensemble. 

I knew my way around the sounds and the jargon, but still can’t hear in that 
recording - it sounds more like a five-note subset of an equal-7-note (thai/khmer) 
scale to me... 25 

The similarity with the process of tuning a sound system by ear, as discussed by 
Henriques in chapter 1, is striking. This is also a clear example of a technical analysis 
qualifying or clarifying the intent of the composer: informing which discussions to have, and 
which questions to ask. 

Developing a Purr Data model for the system allows for the q to be adjusted by ear. 
There is a bandpass filter routine included in the core Purr Data functionality which is also 
defined by a bandpass center frequency and q. Therefore having the resonant frequency 
allows for the signals produced by my digital reiteration to be compared with the signals 
audible in Forest Booties and the q adjusted experimentally until they exhibit a comparable 
level of resonance. Via listening, I determined that q=500 is the proper value for this 
bandpass filter routine to produce virtually identical sound. Listening also helped determine 
the rate of the overall clock, which seems to run at about 110 to 130 ms/cycle and appears 
to be varied over the course of the Forest Booties recording. The interface for this Purr Data program 
(called a patch) is included in appendix D. 
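The shift-register half of this patch can be summarized in a few lines of ordinary code. The sketch below is mine, not a transcription of the Purr Data patch or of the mrpeach external; in particular, the five tap positions are placeholders, not DeMarinis' actual selection.

```python
class ShiftRegister:
    """Minimal 8-bit serial-in/parallel-out register (CD4094-style)."""
    def __init__(self, length=8):
        self.bits = [0] * length

    def clock(self, serial_in):
        """Shift every bit one slot down and load the new input bit."""
        self.bits = [serial_in] + self.bits[:-1]
        return self.bits

# Hypothetical tap choice: which outputs feed the five filters is a
# compositional decision; these indices are placeholders.
taps = [0, 2, 3, 5, 7]
pitches = [406, 833, 946, 1166, 1465]   # filter center frequencies, Hz

reg = ShiftRegister()
noise = [1, 0, 0, 1, 1, 0, 1, 0]        # stand-in for antenna-derived bits
for step, bit in enumerate(noise):
    state = reg.clock(bit)
    sounding = [f for t, f in zip(taps, pitches) if state[t]]
    print(step, state, sounding)
```

Each clock tick moves the pattern one slot along the register, so a single 1 entering the register excites the five resonators one after another in tap order, which is what gives the hardware its melodic, arpeggio-like behavior.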

The potential complexity/output density of the hardware and its software counterpart 
is somewhat self-regulated by the fact that DeMarinis only used 5 of the 16 shift-register outputs. 
This is certainly a compositional decision, shaping the output of the work by establishing 
a distinctive mode of connection between electromagnetic / audio noise and the resonators’ 
outputs. Changing which registers get fed to the filters changes the pattern. I originally 
interpreted DeMarinis’ selection of shift-register outputs as somewhat random, 
since the input is itself somewhat randomly turned into binary data at an arbitrary rate. 

25. Email exchange with the author, 6/4/2020. 




However, reviewing the original publication which included this schematic (DeMarinis 1975) 
suggests that a significant amount of thought went into the rhythmic patterns generated by 
each instance of the device. DeMarinis later confirmed this with his perspective: 

I thought of the choices as being very important as they determine the metric 
structure, especially noticeable when only one pulse is moving along. I regard 
such choices as highly deterministic and not random. 26 

5.3.4 Mathematical analysis of the active bandpass filter in Pygmy Gamelan 

Between the information obtained from analyzing the recording of Forest Booties and 
that contained in the Pygmy Gamelan schematic, it is also possible to derive a mathematical 
formula which allows us to deduce some likely values for the unmarked components in the 
schematic (Figure 5.4). This model, although not quite as convenient as a Purr Data patch in 
terms of interactivity and musicality, is nevertheless valuable from a circuit comprehension 
perspective. Keeping in mind that reverse engineering’s goals are to “increase the overall 
comprehensibility of the system for both maintenance and new development,” (Chikofsky 
and Cross 1990, 16) we can imagine how these two models might serve these complementary 
objectives. 

The following paragraphs detail one approach to this problem using a modified nodal 
network analysis (Ho, Ruehli, and Brennan 1975). Network analysis is a method for analysing 
electrical circuits which considers the voltages across and the current through every electrical 
component in a circuit. A nodal network analysis, also called a node-voltage analysis, is 
based on a simpler analysis method called loop analysis. In loop analysis, each closed loop 
in a circuit diagram is evaluated using Kirchhoff’s voltage law (which describes how the 
sum of voltages around a closed loop is always zero) to derive a mathematical equation 
which describes the voltage and current relationships across and within that loop. Using the 
concept of nodes allows one to extend loop analysis, using Kirchhoff’s current law (which 
states that the sum of currents in a network of conductors at a point is zero), to develop a 
complete mathematical solution describing all current and voltage relationships for systems 
of many nodes (Boylestad 2003, 278). “Nodal analysis is an organized means for computing 
ALL node voltages of a circuit.” (DeCarlo and Lin 2009, 110) 

26. Email exchange with the author, 5/7/2020. 




A node is a point where the terminations of two or more components connect (related to the star 
network discussed in the next section). Quoting Boylestad: 

If we now define one node of any network as a reference (that is, a point of zero 
potential or ground), the remaining nodes of the network will all have a fixed 
potential relative to this reference. For a network of N nodes, therefore, there will 
exist (N-l) nodes with a fixed potential relative to the assigned reference node. 
Equations relating these nodal voltages can be written by applying Kirchhoff’s 
current law at each of the (N-l) nodes. To obtain the complete solution of a 
network, these nodal voltages are then evaluated in the same manner in which 
loop currents were found in loop analysis. (Boylestad 2003, 278) 
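As a concrete illustration of the procedure Boylestad describes (a generic textbook-style example, not part of the Pygmy Gamelan analysis itself): take a 1 mA source feeding node 1 of a three-resistor network, with R1 = 1 kΩ from node 1 to ground, R2 = 2 kΩ from node 1 to node 2, and R3 = 3 kΩ from node 2 to ground. Kirchhoff's current law at the two non-datum nodes gives a 2×2 conductance system G·v = i:

```python
import numpy as np

# KCL at node 1: (1/R1 + 1/R2) v1 - (1/R2) v2 = 1 mA
# KCL at node 2: -(1/R2) v1 + (1/R2 + 1/R3) v2 = 0
R1, R2, R3 = 1e3, 2e3, 3e3
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])
i = np.array([1e-3, 0.0])
v = np.linalg.solve(G, i)
print(v)   # node voltages, approximately [0.833, 0.5] volts
```

Every node voltage falls out of one matrix solve, which is exactly the property that makes (modified) nodal analysis attractive for the filter circuit below.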

Nodal network analysis has disadvantages. In its basic form as described by Boylestad, 
it cannot deal with voltage sources, controlled voltage sources, and current-controlled linear 
elements (Ho, Ruehli, and Brennan 1975, 504). The active filter used by DeMarinis in the 
Pygmy Gamelan circuit is based around an LM3900, which in this case can be thought of 
as a voltage-controlled voltage source with high gain (Texas Instruments 1972, 9). Deriving 
a mathematical formula for this circuit therefore requires a model able to calculate some of 
the currents in some of the circuit’s branches, which is precisely the limitation of basic 
nodal analysis. A modified nodal analysis introduces additional terms into the matrix notation 
it uses to describe the nodes in order to engage with the voltage source, since in this case 
the voltage source is current-dependent because it is connected to a resistor (DeCarlo and Lin 2009, 108). 

In this analysis we will specifically be modelling the resonant filter circuits, labeled as 
such by Paul DeMarinis and bounded in red in figure 5.5, which will be recognized as a detail 
of figure 5.4: 

In this discussion, I’ll be using the hierarchical nomenclature developed for reverse 
engineering by M. G. Rekoff (Rekoff 1985, 245). Because we are operating with sets of 
components, we are operating at the level above the individual component: the subassembly. 

However, to consider the resonant filter subassembly, it will be necessary to consider 
what precedes it. At the higher level of the assembly, the previous subassembly is what 
DeMarinis calls the filter inputs. These are designated in blue in figure 5.5 and presented in 
an idealized form in figure 5.6. 





Figure 5.5: A detail of the Pygmy Gamelan schematic focusing on the subassemblies 
to be studied. 



Figure 5.6: An idealized schematic of one of the filter input subassemblies in 
Pygmy Gamelan. Based on a drawing by Kurt Werner. 

Each of these filter input subassemblies is connected to one of the outputs used on the bit 
shift register. Here the 10 megohm resistor to ground is the last component preceding each 
filter (the filters are detailed in figure 5.7). It is large enough to virtually isolate the filter from 
the pulse shaper circuit formed by the 0.001 µF capacitor and 1N462 diode connected to 
the outputs of the bit shift register integrated circuits. i_in in figure 5.6 corresponds to i_in in 
figure 5.7. 

Indeed, figure 5.7 shows an idealized representation of the relevant section of the circuit 
where the extraneous and active components have been replaced with their idealized equivalents: 
a voltage source (V_in) for the bit-shift register subsystem and a voltage source (V_out) 
for the output amplifier load. The LM3900 integrated circuit is assumed to be equivalent in 


Figure 5.7: An idealized schematic of one of the resonant filter subassemblies in 
Pygmy Gamelan. Based on a drawing by Kurt Werner. 

this case to a nullor (the ideal limit of a voltage controlled voltage source with high gain). In figure 5.7 node 
1 connects the ideal source to the input of the resistor/capacitor (RC) network. Node 2 is 
located in the middle of the capacitors, which all have the same value. Node 3 is at the end 
of the RC network, connecting it to the current source and output. Node 4 is between the 
two R-valued resistors at the top of the network. Node 0, the datum, is ground. The single 
oval is the standard symbol for a nullator, while the two overlapping ovals are the standard 
symbol for a norator. Together, they represent the nullor (Carlin 1964; Martinelli 1965) 
which idealizes the LM3900 amplifier. 27 

These assumptions are verified by SPICE circuit modeling. SPICE (Simulation Program 
with Integrated Circuit Emphasis) is a computer-based simulator of electronic circuits 
(Nagel and Rohrer 1971; Kielkowski 1998). Discussing the operation and mechanisms of 
SPICE is beyond the scope of this dissertation, but it can briefly be stated that it uses a 
number of numerical approaches to approximate both the transient and steady-state behavior 
of combinations of components whose properties have been adequately translated into the 
program’s paradigm. In our case, SPICE simulation shows no functional difference between 
the frequency responses of the two circuits, making it likely that the circuit in figure 5.7 
is equivalent in the time domain to that 
drawn by DeMarinis in figure 5.4, and that the circuit in figure 5.7 is stable for a wide range 

27. This and the following derivation are based on initial calculations and modeling by Kurt Werner, 
detailed in an email to the author dated 4/22/2020. 


of high gain values. 

Modified nodal analysis works with admittances, the inverses of resistances: 

\[ G = \frac{1}{R} \tag{5.1} \]

For the purposes of this circuit, it is helpful to consider the stem resistor value R/x. Rewritten as an 
admittance, that becomes: 

\[ \frac{R}{x} = \frac{1}{G_x}, \quad \text{i.e.,} \quad G_x = \frac{x}{R} \tag{5.2} \]

By inspection, we can write out the following matrix equation for the modified nodal 
analysis of the above nodes (following the examples detailed in Vlach and Singhal 1983, 
114-119, Boylestad 2003, 286-291, or DeCarlo and Lin 2009, 118-119): 

\[
\begin{bmatrix}
G_x & 0 & -G_x & 0 & 0 & -1 \\
0 & G + sC & -sC & 0 & -G & 0 \\
-G_x & -sC & G_x + 3sC & -sC & -sC & 0 \\
0 & 0 & -sC & G + sC & -G & +1 \\
0 & -G & -sC & -G & 2G + sC & 0 \\
-1 & +1 & 0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
V_0 \\ V_1 \\ V_2 \\ V_3 \\ V_4 \\ i_{nor}
\end{bmatrix}
=
\begin{bmatrix}
-i_{in} \\ i_{in} \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}
\tag{5.3}
\]

Where s is the continuous-time Laplace variable. Here, because we are interested in 
obtaining a magnitude response at frequency f (in hertz), we define it as follows: 

\[ s = 2\pi f j \tag{5.4} \]

Where j is the imaginary unit. In a modified nodal analysis, each node is 
considered in reference to a “datum” node. This datum node (the first row and the first column of 
the left-hand matrix in equation 5.3) can be removed from the equation. This allows us to rewrite 


equation 5.3 as: 

\[
\begin{bmatrix}
G + sC & -sC & 0 & -G & 0 \\
-sC & G_x + 3sC & -sC & -sC & 0 \\
0 & -sC & G + sC & -G & +1 \\
-G & -sC & -G & 2G + sC & 0 \\
+1 & 0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
V_1 \\ V_2 \\ V_3 \\ V_4 \\ i_{nor}
\end{bmatrix}
=
\begin{bmatrix}
i_{in} \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}
\tag{5.5}
\]

Designating X as the left-hand matrix in this multiplication, we can 
name its inverse Y: 

\[ X^{-1} = Y \tag{5.6} \]

We can then rewrite the matrix multiplication as: 

\[
\begin{bmatrix} V_1 \\ V_2 \\ V_3 \\ V_4 \\ i_{nor} \end{bmatrix}
= Y
\begin{bmatrix} i_{in} \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
\tag{5.7}
\]

We can solve to obtain a closed form of the output voltage in terms of the input current: 

\[ V_3 = Y_{3,1} \, i_{in} \tag{5.8} \]

Rewriting this as an expression of s, as defined above, and because the voltage at node 
3 is effectively our output signal: 

\[ H(s) = V_{out}(s)/i_{in}(s) = Y_{3,1} \tag{5.9} \]


Using the code in Appendix G to perform the matrix inversion yields the following 





closed form: 

\[
H(s) = \frac{V_{out}(s)}{V_{diode}(s)} = \frac{1}{10{,}000{,}000} \cdot 
\frac{\frac{2R^3C^2}{x}s^2 + R^2C\left(1 + \frac{6}{x}\right)s + 2R}
{\frac{R^3C^3}{x}s^3 + \frac{4R^2C^2}{x}s^2 + \frac{3RC}{x}s + 1}
\tag{5.10}
\]

Where V_diode is the voltage across the diode placed before the filter circuit, C is the 
value of each of the three capacitors in the RC filter network, R is the value of the “bridge” 
resistors (these are specific to each filter, resulting in the individual pitches produced by the 
system) and R/x is the value of the “stem” resistor, ambiguously marked R/12 in DeMarinis’ 
schematic. The factor of 10,000,000 comes from the 10 megohm resistor, which relates the 
input current to the diode voltage. In LTSpice simulations, the value x = 12 yielded unstable filters (these self-oscillate, 
which is not a feature of their real-life counterparts). 
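The closed form can be checked numerically without symbolic algebra: build the reduced matrix of equation 5.5 at a given s, solve, and read off the node-3 voltage. The sketch below is a verification aid of my own, not the Appendix G code; it assumes the matrix as transcribed above and x = 11.

```python
import numpy as np

def H(f, R, C, x=11.0):
    """Transfer impedance V3 / i_in of the filter, found by solving the
    modified nodal system of eq. 5.5 at s = j*2*pi*f."""
    s = 2j * np.pi * f
    G, Gx = 1.0 / R, x / R
    X = np.array([
        [G + s*C, -s*C,        0.0,      -G,         0.0],
        [-s*C,    Gx + 3*s*C, -s*C,      -s*C,       0.0],
        [0.0,     -s*C,        G + s*C,  -G,         1.0],
        [-G,      -s*C,       -G,         2*G + s*C, 0.0],
        [1.0,      0.0,        0.0,       0.0,       0.0]], dtype=complex)
    b = np.array([1.0, 0, 0, 0, 0], dtype=complex)   # unit input current
    return np.linalg.solve(X, b)[2]                   # V3 per unit current

# Near DC the magnitude should approach 2R (here 2 x 20 kOhm = 4.0e4):
print(abs(H(1e-6, R=20e3, C=33e-9)))
```

At very low frequency the capacitors drop out and the solve reduces to the DC gain of 2R implied by equation 5.12, which is a quick sanity check on the transcription of the matrix.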

Then, defining: 

\[ \hat{s} = RCs \tag{5.11} \]

We can rewrite this as: 

\[
H(\hat{s}) = \frac{2R}{10{,}000{,}000} \cdot 
\frac{\left(\frac{1}{x}\right)\hat{s}^2 + \left(\frac{x+6}{2x}\right)\hat{s} + 1}
{\left(\frac{1}{x}\right)\hat{s}^3 + \left(\frac{4}{x}\right)\hat{s}^2 + \left(\frac{3}{x}\right)\hat{s} + 1}
\tag{5.12}
\]

This change of variable makes it clearer that the product of R and C sets the center 
frequency of the filter. It also points to the fact that changing R without changing C has 
a significant impact on the scaling factor on the left-hand side of the equation. Finally, it 
shows that the right-hand term, defined in terms of \(\hat{s}\), approaches 1 in DC 
operation and 0 at very high frequencies. Overall, this is a low-pass response, even though 
locally the filter acts as a bandpass because it resonates. 

Elaborating on this, we can highlight that changing only C affects 
only the center frequency, and not the resonance. Furthermore, specific to this unique 
design, increasing the value of R by a given factor and decreasing C by that same factor 
changes the gain by that factor without changing the resonance or center frequency. This 
appears quite practical considering the musical tuning application at hand. 28 
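This R-up, C-down property can be confirmed numerically from the transfer function. The check below uses my reconstruction of the closed form (equation 5.10) with x = 11; it is an illustration, not part of the dissertation's own codebase.

```python
import numpy as np

def Hmag(f, R, C, x=11.0):
    """|H| from the closed-form transfer function of eq. 5.10."""
    s = 2j * np.pi * f
    num = (2*R**3*C**2/x)*s**2 + ((x + 6)/x)*R**2*C*s + 2*R
    den = (R**3*C**3/x)*s**3 + (4*R**2*C**2/x)*s**2 + (3*R*C/x)*s + 1
    return np.abs(num / den) / 1e7

f = np.linspace(50, 2000, 4000)
a = Hmag(f, R=20e3, C=33e-9)      # one plausible R/C pair
b = Hmag(f, R=200e3, C=3.3e-9)    # R x10, C /10: same RC product
print(f[a.argmax()], f[b.argmax()])   # identical center frequency
print(b.max() / a.max())              # gain scaled by the same factor, 10
```

Since RC (and therefore every term of the right-hand ratio in equation 5.12) is unchanged, only the 2R prefactor moves, exactly as the text argues.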

This mathematical understanding of the system, in coordination with knowledge of 
common values for resistors and capacitors, can be used to deduce some real and likely values 


28. Based on an email discussion with Kurt Werner dated 5/13/2020. 



for the un-notated (because variable) elements in the schematic, namely the resistors and 
capacitors in DeMarinis’ filter sections. Although a fully parametrized equation (one which 
expresses the q and center frequency of the filter directly in terms of the component values) 
is beyond the scope of this dissertation because the design is so unusual, we can instead use 
the calculated transfer function to plot the magnitude response for different RC pairs 
until close matches to the measured peaks are obtained. 
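A sketch of such a search follows. This is not the Appendix F code: the E24/E12 value subsets, the frequency grid, x = 11, and the reconstructed transfer function are all my assumptions, and the search is deliberately simple (and therefore slow) rather than optimized.

```python
import numpy as np

def peak_freq(R, C, x=11.0):
    """Frequency (Hz) at which the magnitude response (up to the constant
    1/10^7) is largest, found by brute-force grid evaluation."""
    f = np.linspace(50, 2000, 2000)
    s = 2j * np.pi * f
    num = (2*R**3*C**2/x)*s**2 + ((x + 6)/x)*R**2*C*s + 2*R
    den = (R**3*C**3/x)*s**3 + (4*R**2*C**2/x)*s**2 + (3*R*C/x)*s + 1
    return f[np.abs(num / den).argmax()]

# Standard component series as a search space (an assumption of mine).
E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]
E12 = E24[::2]
Rs = [m * 10**e for e in (3, 4) for m in E24]    # 1 kOhm ... 91 kOhm
Cs = [m * 10**e for e in (-9, -8) for m in E12]  # 1 nF ... 82 nF

for target in (406, 833, 946, 1166, 1465):
    R, C = min(((R, C) for R in Rs for C in Cs),
               key=lambda rc: abs(peak_freq(*rc) - target))
    print(f"{target} Hz: R = {R/1e3:g} kOhm, C = {C*1e9:g} nF")
```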

As mentioned above, LTSpice simulation indicates that R/12 (the note arguably legible 
on DeMarinis’ schematic for the stem resistor) results in unstable filters; therefore, we 
used R/11. Because running this code is quite slow, it is mostly concerned with matching 
the peak frequencies of 406 Hz, 833 Hz, 946 Hz, 1166 Hz, and 1465 Hz, rather than also 
matching a resonance factor; the resonance factor is also more difficult to measure accurately 
from the original recording than the center frequency of that resonance. To further 
speed up this computation, we also used subsets of the standard R and C values. This does 
point to some immediate future work. 

The matching peaks are included in figure 5.8. 



Figure 5.8: The plot produced by the code in Appendix F, showing the five 
closest matches produced by the model to the measured resonance 
peaks in the Forest Booties recording. 


The closest resistance and capacitance values are, per peak: 




• for 406 Hz: R is 20 kΩ and C is 33 nF 

• for 833 Hz: R is 4.7 kΩ and C is 68 nF 

• for 946 Hz: R is 13 kΩ and C is 22 nF 

• for 1166 Hz: R is 15 kΩ and C is 15 nF 

• for 1465 Hz: R is 2.2 kΩ and C is 82 nF 

These values were verified in LTSpice to check stability and confirm our model was 
accurate. LTSpice magnitude and phase response graphs are included in Appendix F. 

Of course, resistors and capacitors are manufactured with non-negligible tolerances 
(often between 5 and 20 percent deviation from the marked value, though DeMarinis reports 
sorting for much tighter matches), so “tuning” a Pygmy Gamelan 
circuit should not necessarily be seen as an act equivalent to that of tuning a band or 
orchestra instrument. Because the device is not necessarily meant to be played with other 
instruments (in Forest Booties it is played with field recordings), the idea that it would 
have to be “in tune” with anything other than itself would be self-imposed. In fact, this is 
meaningful because it points to a potential explanation for the frequency peaks measured 
in the recording: they are not a standard scale or combination of pitches, rather they could 
simply be determined by an experimental process of randomly picking from what values 
are available, potentially with a trial and error approach to listening and replacing. That 
standard values match fairly closely with the recorded reality of the system’s output would 
certainly not conflict with this interpretation. 

In other words, this model allows us to combine experimental data with technical 
information and contextual documentation to enable the development of a circuit replica of 
the device, in a manner aware of the dangers of wanting to prescribe the idea of an “original” 
or “ideal result.” In that sense, building a physical replica of DeMarinis’ device, although a 
much more attainable objective now that part of this missing information has been addressed, 
felt unnecessary and outside the scope of this subsection. 
A functional digital equivalent is now available, documentation has been shared, and the 
objects maintain their mysterious existence in the closets of those lucky enough to receive 
DeMarinis’ gifts. Keeping Chikofsky and Cross’ definition of reverse engineering in mind, 
we have both increased the comprehensibility of the original project and opened it to new 
developments. 





5.3.5 Interface 

The front panel of the original device displays five lights, one for each of the notes in 
the pattern. Once the device is powered, like Steve Reich’s Pendulum Music, the Pygmy 
Gamelan system will produce sounds, playing the piece until it is unplugged. My Purr Data 
system approximates this by beginning to play sounds as soon as it is run on a computer. 
Because the system is relatively simple (no complex audio routing or setup), it is possible to 
make it a relatively autonomous program, reflecting the original. 

Just like the original, the program has a master clock, which decides how fast the 
shift registers shift bits down their 8 memory slots. A significant departure in my Purr Data 
patch relates to this: rather than using radio signals, which are difficult to access on a laptop 
without additional hardware or software not immediately compatible with Purr Data, I use 
inputs from the microphone now included in most laptops for videoconferencing purposes. 
By programming a simple gate, which produces a 1 if the signal digitized by the laptop is 
loud enough at any given cycle, I produced a reasonably equivalent analog for the original 
electromagnetic antenna used by DeMarinis. 
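The gate itself reduces to a few lines. This sketch mirrors the logic of my Purr Data gate in Python; the threshold value is an arbitrary placeholder, since in practice it would be adjusted to the room and microphone at hand.

```python
import numpy as np

def gate(block, threshold=0.05):
    """Return 1 if this clock cycle's audio block is loud enough, else 0 -
    a stand-in for the electromagnetic noise input of the original."""
    return int(np.sqrt(np.mean(np.square(block))) > threshold)

rate = 44100
cycle = int(0.12 * rate)        # roughly 120 ms of audio per clock cycle
quiet = np.zeros(cycle)
loud = 0.5 * np.ones(cycle)
print(gate(quiet), gate(loud))  # -> 0 1
```

One gate decision per master-clock cycle becomes the serial bit fed into the shift registers, completing the noise-to-pattern path described above.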

5.3.6 Conclusion 

Pygmy Gamelan appears to have been built by DeMarinis via his interest in listening to 
the music implicit in the parts easily accessible to him. In other words, it is a co-construction 
of local resources and his ideas. The combination of shift registers and tuned bandpass filters 
is a compact yet interesting way to put together two types of circuits not often seen together - 
even if taken individually, shift registers and Burhans’ filter circuit are very much “canonical.” 
The vernacular aspect of this project falls in line with DeMarinis’ “folkloric” aspirations for 
the system, showing how local variations (the Pygmy Gamelan ) can emerge from standard 
subsystems (a shift register and a circuit published in an academic journal). DeMarinis offers 
some details as to the context which surrounded the initial iteration of the work: 

At the Expo ’67 in Montreal I had seen, in the pavilion of an African nation (I do 
not recall which) a gorgeous set of chess pieces hand carved from plastic, i.e. the 
"plastic" material was used not for its plastic properties (injection molding), 
but formed by traditional "folk" methods. This experience became emblematic 
for me during the period when I was building pieces from electronic circuitry in 
thinking about how artists could (we would use the verbs "to appropriate" or "to 





re-purpose" now, but those usages didn’t exist then) use technological materials. 

Since I was very much against the idea of "instrument building" or thinking of 
technology as a step toward making art but without becoming part of the work 
of art itself, i thought of those chess pieces as a way into explaining my efforts. 29 

Pygmy Gamelan can therefore be read as an example of “handmade electronic music” 
that acknowledges the musical potential of technology as the center of a project, in addition 
to or even perhaps rather than a tool that enabled a musical project. One can see in the 
schematic how basic integrated circuits, meant to count and amplify things, are in this “folk” 
context used to generate musical phrases. DeMarinis elaborates: 

Of course such ideas today are impossible, as they reek of the mess that is post¬ 
colonialism and I would not ever use them. There are much better ways of 
discussing the relationship between art and technological materials, though it is 
still an open discussion in many ways. 30 

DeMarinis’ self-reflection reveals an equally important question of ethics. He might 
have found the bit shift registers and operational amplifier integrated circuits while walking 
around the electronics shops of the Bay Area in the late sixties and early seventies, and felt that 
using them as part of this device fulfilled the interest he had in this “folkloric” use of technology, 
which was a common thread across his, Gordon Mumma’s and Robert Ashley’s projects. 
Echoing a statement by Ron Kuivila mentioned earlier in this dissertation, DeMarinis was 
taking stock of the musical affordances and constraints not just of technology, but of the 
context surrounding it as well. With this statement, DeMarinis also takes stock of how this 
context may have changed, although the electronics described in the schematic do still exist. 

Therefore, the Pygmy Gamelan is perhaps better considered as an autonomous device 
building an alternate narrative around technology, in contrast to Reich’s phase shifting pulse 
gate, which was more intentionally pre-designed to serve a specific mechanical purpose. 

Further conversation with the artist revealed that DeMarinis created a series of devices 
with variations on the circuit discussed above, as well as an entirely undocumented second 
series of devices with more sophisticated processing: 

29. Email conversation with the author, 1/5/2020. 

30. Ibid. 




I made, all in all, two editions of the Pygmy Gamelan. The first edition consisted 
of 6, of which 4 are extant (that I know of). Some are in museum collections, some 
in private collections and I have a couple. The second edition of ~ 6 [sic] was 
made in 1976 for the CIE show at Musee Galliera and also shown at Galerie 
Shandar. I have a couple, and Laetitia Sonami has one. Not sure of the others. 

Each unit has a unique pitch set and different rhythmic scheme, the second 
edition had much more varied logic sequencers beyond the TTL shift register, 
including one that used the RCA CMOS 1-bit microprocessor! 31 

The existence of multiple circuits, and of multiple versions of the system, explains 
why the filter component values were not in DeMarinis’ schematic. They also fit with a 
“folkloric” version of homemade electronics, one which embraces the natural variation of 
craft objects. DeMarinis elaborates on the political underpinnings of expertise, interactivity, 
and synthesizers, as embodied by the Pygmy Gamelan : 

As I recall, the term interactivity, with respect to art, didn’t really exist then 
and didn’t emerge until the late ’70s or early ’80s. Of course, it has gone through 
several iterations and now means something quite different. The idea of the 
viewer participating in the behavior of the artwork hadn’t been formalized yet. 

On the other hand, to the people who were building synthesizers, it was part and 
parcel of the territory, continuing the tradition of bongo drums and clarinets. 

So while the Pygmy Gamelan emerged from that womb and inherited several 
capabilities of its ancestry, the notion of interactivity was somewhat differently 
constructed; you could affect the artwork but not quite control it. Nobody could 
become a virtuoso. So clearly, I had made my desired move away from the 
experimental musical instrument and gained interactivity and installation in the 
bargain. I suspect that the reason we accepted the idea of interactive art at that 
time has something to do with the general problem of making art in a democracy, 
outlined by de Tocqueville in the 1830s. 

31. Email with the author 6/4/2019. The existence of 6 to 12 copies of the Pygmy Gamelan installation- 
system is originally mentioned in Chiba (1997), and developed in Pritikin (2012): “I had by then made about 
a dozen of them, each playing in a different tuning. It was Jim Pomeroy [a San Francisco performance artist 
and sculptor, 1945-1992] who suggested that I make an installation of all of them together—that they could 
move away from individual sound sculptures into the realm of an installed environment. I went on installing 
them for about three years in groupings inside galleries, where they would infect the space.” 


This project illustrates clearly the importance of a circuit schematic and a recording 
of the original system in operation close to its time of manufacture, but also the level of 
skepticism with which written information should be read prior to understanding the context 
of any given work. It also highlights the deep meaning imbued in these experiments by 
their authors, and their various modes of intuitively transmitting these meanings to their 
audiences via publicly accessible art. As with many studies, much work remains to be done before understanding all that went into this work, but this level of detail cannot be taken further without schematics or close examination of the circuits of these other undocumented editions mentioned by DeMarinis. In relation to Ghazala’s concept of “theory-true” practices (Ghazala 2005, 12), we have certainly gone beyond the usual scope of circuit-bending, and
yet it is clear how reverse engineering, re-engineering, and generative speculations might fit 
within or serve a circuit-bending practice. 

5.4 Ralph Jones’ 1978 Star Networks At The Singing Point 

Ralph Jones was one of the founding members of David Tudor’s “Composers Inside 
Electronics” group. An active experimental musician throughout the 1970s, he studied composition with Lejaren Hiller, Julius Eastman and Morton Feldman. He also received circuit
design instruction from Robert Moog (R. Jones 2004, 82). 

His work Star Networks at the Singing Point (1978) is part of the first set of pieces 
developed by members of CIE after their initial meeting for Rainforest IV. It was composed 
for a concert at the Kitchen in New York. It does not simply rely on circuits for its rendition, 
but on the repeated making of circuits. 

This section will detail how, unlike Reich’s phase shifting pulse gate or DeMarinis’ circuit for Pygmy Gamelan, there is no canonical Star Network circuit, only a prototypical topology: the star network. Each iteration of this topology, within the parameters set by Jones’ score, will only temporarily produce acceptable sounds. Together, these sounds form the piece. In other words, the work of building successive systems within the performance is the piece.

5.4.1 Materials available 

Contrasting with DeMarinis’ or Reich’s work, there is a significant amount of information available on Star Networks at the Singing Point: this is in part due to my own




involvement in playing recent re-iterations of the piece. In addition to two versions of the 
score and performance instructions, there are two lengthy interviews with Jones roughly 
thirty years apart, both of which discuss this specific work. There are recordings and video of recent performances. Furthermore, because there are no “correct” circuits, only interesting behaviors of circuits, there is no pressure to pursue a canonical device. Star Networks at the Singing Point is an instance where the vernacular use of the “star network” topology overlaps with the canonical, engineering meaning of the term.

5.4.2 Analysis of the system 

This overlap is in some sense the inspiration and title for the piece. Jones encountered the term “star network” while perusing a dictionary of engineering terms compiled by the Institute of Electrical and Electronics Engineers (IEEE). In the manuscript for the 1978 version of the performance instructions, Jones copies the definition:

Star Network: a set of three or more branches with one terminal of each connected 
to a common node. (R. Jones 1978) 

This definition is still present in modern iterations of the dictionary (Institute of Electrical and Electronics Engineers 2000, 1100). In an interview with John Minkowsky, Jones offers an explanation for the genesis of the piece:

I’d been interested for a long time in trying to determine as many compositional 
aspects of a piece as I could at the level of electronics design. So, I tried to find
a way in which to make a potentially collaborative work defined at the circuit 
design level. Late one night I was leafing through the IEEE Dictionary, and I hit 
the definition for ‘star network’, and loved the sound of the words. I just liked the 
poetics of it, and it came to me that one could make complexes of star networks 
that would function in feedback to destabilize oscillating circuits. (Minkowsky 
1980, 1) 

In a 2010 interview with Matt Wellins, amended by Jones in 2020, Jones elaborates on 
the idea at the center of the work: 




That’s what Star Networks is about: arranging a set of passive components so 
that you can freely connect them in an ad-hoc, improvised network. You make a 
sort of web with many nodes, each of which has at least three branches, creating 
a mesh of series/parallel interconnected components. Each branch is a passive 
component like a resistor, capacitor, inductor or variable inductor, diode, pot, 
transformer, or what have you. Then you pick a point to feed and one to pick 
up from, and you connect a high gain preamplifier to it. This makes a feedback 
network around the preamp that has multiple paths through it (R. Jones 2010, 7).

Complex combinations of components with various nonlinear transfer functions will 
temporarily create complex sonic patterns if placed in feedback loops. Paraphrasing Jones, 
one must patch a complex loop of these components, containing one or more star networks (three or more components terminating in the same point), and place it in the feedback loop
of a medium (40-80dB) gain amplifier. These instructions are from the 1978 performance 
notes, following a list of potentially interesting components to use (specifically capacitors, 
inductors, transformers, diodes, etc.): 

Freely connect the passive elements to form complexes of interconnected star networks which provide both a number of ports to which external gain and output stages may be connected, and a number of interconnected signal paths, the impedance and phase characteristic of each of which varies with frequency. (R. Jones 1979, 1)
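The IEEE definition Jones quotes reduces to a simple structural property of a netlist: a node at which three or more branches terminate. As a minimal illustration (the data layout and names here are hypothetical, not part of Jones’ score or of any engineering standard), a patch can be represented as a list of two-terminal branches, and star nodes identified by counting the terminals that meet at each node:

```c
/* A two-terminal passive branch (resistor, capacitor, diode, ...)
   connecting two numbered nodes of the patch. Hypothetical layout. */
struct branch {
    int node_a;
    int node_b;
};

/* Per the IEEE definition quoted by Jones, `node` is the common node
   of a star network when three or more branches have one terminal
   connected to it. */
int is_star_node(const struct branch *patch, int n_branches, int node)
{
    int degree = 0;
    for (int i = 0; i < n_branches; i++)
        if (patch[i].node_a == node || patch[i].node_b == node)
            degree++;
    return degree >= 3;
}
```

In performance, the equivalent check is made by eye and ear: any junction where three or more components meet qualifies, and the web as a whole contains many such junctions.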

The full performance instructions, with a new introductory paragraph, were included 
in issue fourteen of the Leonardo Music Journal in 2004. This inclusion is apt, as few pieces 
capture the ephemeral nature of chaotic feedback circuits as explicitly as Star Networks. 
This new introduction elaborates: 

In Star Networks at the Singing Point, the performer creates analog circuits composed of multiple nodes, each of which has three or more connections—in essence,
“mazes” having a number of paths through which current can flow. Connecting 
such a circuit in a feedback loop around a gain stage produces an oscillator that 
is inherently unstable. Tuned to what is called in chaos theory a “tipping point,” 
the circuit sings unpredictably of its own accord. (R. Jones 2004, 81) 




In other words, Star Networks is very much a topological composition, in the sense 
that its score prescribes a particular form of network rather than the nature of the network 
itself. It does define the general type of components needed for a successful performance, but 
these are more about sharing lessons from past attempts at the piece than truly prescriptive 
instructions. This reflects Jones’ interest in indeterminacy. Discussing his lessons with 
Lejaren Hiller, Jones details: 

Jerry Hiller was best known for computer music. I had some exposure to computers in high school, learning to program an IBM mainframe with punch cards, but I was never really interested in computer applications for music. To me, computers seemed to be tools for absolutely deterministic, sample-accurate work, while I was going in the direction of indeterminacy. (R. Jones 2010, 2)

Star Networks is therefore remarkable because it embodies the intersection of Jones’ 
interests in indeterminacy and electronics, resulting in an instruction-based score that is 
both straightforward and effective in turning electronics into improvisational building blocks. 
Tudor’s Rainforest IV was successful in part because the diagram that acts as its score was clear, concise, and generative, resulting in the myriad incarnations that have occurred since the original 1973 piece. Star Networks, building off of that, shifts the focus of Tudor’s
perspective on electronics as part of an ecosystem to an alternate configuration which is an 
ecosystem of electronics on its own terms. 

5.4.3 Interface 

This system is unique in the sense that the components themselves are the layer at which performers interact with the system. Therefore, it is possible to say that these components,
and their star network configurations, are the interface of the system. Playing the system 
involves repatching the components (and, to a lesser extent, controlling the gain of various 
amplification stages in a feedback loop). Star Networks is interesting because it collapses 
instrument building, performance and electronics all into one. 

5.4.4 Performing Star Networks at the Singing Point 

This shift of instrumental assembly into the performance itself, rather than prior to the performance, resonates deeply with my own interests as a composer. The implications




of this piece have become clearer as I have had the chance to perform it with Jones. This, I would later realize, is typical of students of David Tudor: as Phil Edelstein and John Driscoll explained the first time I met them at the School for Poetic Computation in Manhattan in 2015, Tudor’s musical practice is best preserved by being played and expanded upon.

The event we met for was the first time I performed Star Networks. Upon invitation from Ron Kuivila, another CIE member, I was offered an opportunity to do some research on the tube preamplifiers preferred by Jones for Star Networks. This was because of my past research on vacuum tube circuit design (which was the main topic of my undergraduate thesis). Although this research in tube amplification was ultimately not used, because the preamplifiers contained in a Mackie 1202 mixer and tube preamplifiers happened to be sufficient for this piece, Kuivila offered that I take his place in a concert. This event would follow talks by some of the current members of CIE at the Avant.org event “Circuit Scores: Electronics after David Tudor,” organized by Charles Eppley and Sam Hart with the support of Taeyoon Choi and the School for Poetic Computation (Charles Eppley 2016; Choi 2016).

On March 28th 2018, I was lucky enough to participate in the 40th anniversary event 
of the Composers Inside Electronics ensemble at the Kitchen in NYC, marking four decades 
since their original show there, which included the premiere of Star Networks. Jones invited 
me to participate in this rendition based on the interest I had expressed when performing at the School for Poetic Computation two years prior. Jones provided us with updated
performance instructions, part of which are included in figure 5.9. These, overall, varied 
from the original notes in three major ways: a larger ensemble, a shorter turnaround time 
for the performance of each feedback loop, and a shorter overall performance time. The 
underlying premise remained, however, identical. 

Although most people used breadboards, and Ralph used custom banana plug connector strips for his collection of passive components, my A channel used a dead tube amplifier
donated by a friend while I was at Hampshire College and my B channel used a collection 
of random capacitors, diodes and small signal transformers I’ve accumulated over the years. 
There is an autobiographical aspect to this piece. These are connected using the potatoes 
in figure 5.10, which act as resilient and multi-use electrical conductors / connectors. 

This was a large ensemble for the piece: the ensemble was directed by Ralph Jones, 
but also included Ron Kuivila, Matt Wellins, John Driscoll, Phil Edelstein, Michael Johnsen 





Figure 5.9: The connection diagram provided by Jones for performance of Star Networks in 2018. Used with permission.

and myself. Ralph, Ron, John and Phil are all original members of Tudor’s CIE ensemble, 
with John and Phil having worked in the last decades to keep Tudor’s work alive alongside their own.

5.4.5 Conclusion 

Star Networks is a co-construction of electronics concepts and performance: it requires 
both a theoretical understanding of the geometry being prescribed by the instructions and 
the ability to collect components which will lead to interesting feedback patterns. In that sense, along with many of the pieces made in and around CIE, it is a precursor to the work of later
circuit-performers, such as Darsha Hewitt, the trio Loud Objects (Kate Shima, Kunal Gupta 
and Tristan Perich) or Martin Howse. 

Performing Star Networks can be harrowing because even minute variations in each 
feedback loop circuit can have drastic effects on its “tipping points” and their sonic outputs. 
This is particularly difficult to manage as a solo piece: this is why Jones later updated 
the instructions so the piece would involve as many as seven people. In its latest iteration, Star Networks fixes a rather frenetic performance practice of cuing up temporary and
unpredictable sound making systems out of circuits, in a manner opposed to Reich’s thoroughly deterministic score for Pulse Music, and distributes responsibilities across multiple performers to do what they can to control and push timbres or patterns into interesting directions. Furthermore, even if Pygmy Gamelan is a piece fully determined by its schematic, its autonomous nature places human agency more strictly in the craft stage of the device’s assembly rather than in the performance. In Star Networks, the assembly process is the performance. These three pieces illustrate, even within their common origin in the experimental music scene of New York City in the 1960s, a wide range of possibilities within “circuit music.”


Figure 5.10: My setup for performing Star Networks at the Singing Point with Composers Inside Electronics, The Kitchen, NYC (March 2018).

Nicolas Collins, himself a later member of CIE, writes about Star Networks in the 2007 
Cambridge Companion to Electronic Music: 


Some participants were naifs or muddlers who designed beautiful, oddball circuits out of ignorance and good luck. Ralph Jones encapsulated this spirit in Star
Networks (1978), which asks performers to build circuits on stage according to a 
configuration that forces almost any selection of components into unpredictable 
but charming oscillation, neatly bypassing any need for a theoretical understanding of electronics on the player’s part. In defiance of the conventional wisdom of
using oscilloscopes and other test equipment as a visual aid to the design process, 
in Star Networks the instruments are designed by ear alone, and the audience 
follows every step of the process by ear as well. (Collins 2007, 47) 

Elaborating on Collins’ description, the instructions for the piece clearly state that, if possible, unstable sound configurations should be cued on a local mix (we did this with headphones in both instances) before being fed into the main mix. The audience is not supposed to hear the tuning process, although in practice this happens a number of times as we “fix” unexpected developments of a specific configuration that was interesting for less time than expected.

Since the original conception of the piece as an hour-long indeterminate process (as is recommended in the original version of the instructions), Ralph has changed his preferences to include a slightly larger ensemble (6 instead of 4) and a much shorter time span: each
unstable system should be played for 2-3 minutes after being tuned, and the overall piece 
should last 20 to 25 minutes. 

Jones, working up to the initial performances of his work in 1978, developed a collection of passive components for this piece, ranging from a point-contact galena diode to various heavy-duty capacitors, diodes, transformers, and more. His invitation of performers with various
levels of expertise (with the piece, and electronics in general) does make it a success in terms 
of being intellectually accessible. Collins’ comment regarding ignorance should be read as 
a proof of accessibility and the likelihood of amazed surprise, rather than a mark of willful 
anti-intellectualism. Jones did study circuits with Robert Moog, and he is one of the few composers of CIE who “accurately” used an engineering term as the title of a piece.

On a melodic, rhythmic and timbral level, the piece relates to Tudor’s work, with 
Star Networks ’ erratic chirps, thumps and squelches sounding like an electronic choir of 
birds, complementing the electromechanical resonances of Rainforest IV or the digitally 
controlled neural networks of Neural Synthesis. Musical applications of electronic feedback loops are common in David Tudor’s work; however, none of Tudor’s works explicitly prescribes




a building and rebuilding of the feedback loop in the way that Jones’ text score does. In this 
sense, although Jones’ work is meaningful and recognizable within the work of Composers 
Inside Electronics and Tudor’s heritage, it also stands as a unique and distinctive piece, one 
which perhaps most literally acknowledges the impermanence of our technology and of all 
the music implicit in it. Reverse engineering, in this context, can act as documentation for 
future developments by simply explaining the techno-cultural factors which rationalize the 
work’s aesthetic priorities. 

5.5 Tamara Duplantis’ 2015 Downstream 

Tamara Duplantis is a composer and programmer, currently a doctoral student at the 
University of California at Santa Cruz in the Computational Media program. Her 2015 
piece Downstream is composed for the Nintendo Game Boy environment. It runs either 
as software for custom Game Boy game cartridges, or in one of the numerous emulators 
which allow users to play Game Boy games on more traditional personal computers. Mark 
Marino, detailing the implications of “reading” code, writes: “Critically reading code does 
not depend on the discovery of hidden secrets or unexpected turns, but rather examining 
encoded structures, models, and formulations; explicating connotations and denotations of 
specific coding choices; and exploring traces of the code’s development that are not apparent 
in the functioning of the software alone.” (Marino 2020, 17) With this in mind, this section 
presents an interpretation of Duplantis’ work and the context it was assembled in. 

5.5.1 Materials available 

The game is sold as pay-what-you-want software. Purchasing it provides the source code and an executable ROM (read-only memory) file for such an emulator, named Visual Boy Advance. The source code, contained in the file named “Downstream.c,” is made of 128 lines of C code. The ROM file is obtained from the source code by compiling this C code file with the libraries included in the Game Boy Development Kit (GBDK), an unofficial toolkit unsanctioned by Nintendo (the original designers, makers and sellers of the Game Boy systems). This toolkit was developed by a loose set of enthusiast communities to facilitate the development of new Game Boy programs outside of Nintendo, and was not meant to replicate original game programming mechanisms (GBDK 2020). Downstream is
one example of such new programs, amongst hundreds made even since the original Game Boy stopped being manufactured in 2003.

Running the emulator (“VisualBoyAdvance-M.exe”) lets the user select and run the .gb file, which executes the game. This game contains both visuals and
music. After a brief introduction sequence, the screen displays a constant stream of text 
characters and sound, with the characters generally moving up and to the left (or, with 
the screen printing new characters at the bottom right). The user can interact with the 
sounds and visuals using the directional pad controls (up, down, left and right) as well as 
the standard A and B buttons from the Game Boy. Although pressing buttons can clearly be correlated with changes in the game’s behavior, it is not immediately clear what the logic of the controls is, at least not at the beginning. Looking at the source code, as will be done in the next subsection, clarifies this.

Beyond the source code, there is not much information publicly available about the 
work. Tamara Duplantis is, however, active in online digital musics and programming communities (this is how I became aware of her work), and asking her questions relating to Downstream has been relatively straightforward.

In this particular case, it would seem that the work, the system and the piece are 
strongly correlated. This is a common side-effect of using computer code: because Downstream is defined entirely by the combination of the source code and the platform on which
it operates, the source code fully defines the piece. Changing anything about the piece 
involves changing the system. Nevertheless, Downstream is also generative, so individual 
performances of the piece maintain some level of variability. As she discusses in her responses to my questions, this is very much intentional and the result of a curiosity about the parallel between playing a game and playing an instrument.

5.5.2 Analysis of the system 

The game opens with the following three sentences, printed to the screen of the Game 
Boy one after the other: 

“You are floating downstream.” 

“The waters rush around you.” 

“You are lost.” 





Accompanying each sentence is a burst of white noise, as rendered by the Game Boy’s 
digital sound synthesis system (called the GBSS, or Game Boy Sound System). 

The content following this introduction is generated programmatically using a single 
loop of code, which contains conditional statements describing the behavior of the audio and 
visual components of the work. 

At the root of the loop is a while statement regarding the user-defined variable “pitch1.” This variable is the main value driving this process work: when it reaches 1496, the game is made to silence itself.

The rest of the loop contains four if and else statements, describing how exactly the pitch1 variable might reach 1496, and the various other processes that update the screen and audio content as it reaches that value. The process is described in detail in figure 5.11.
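The loop structure described above can be paraphrased in plain C. This is a sketch, not Duplantis’ source: the name pitch1 and the terminal value 1496 come from the analysis above, while the helper functions are hypothetical stand-ins for the GBDK sound and display calls (here they merely record activity so the skeleton can run on any machine):

```c
/* Hypothetical stand-ins for the GBDK audio and display routines;
   they only count invocations so the skeleton runs anywhere. */
static unsigned long audio_updates = 0;
static unsigned long chars_printed = 0;
static int silenced = 0;

static void update_audio(unsigned int pitch)         { (void)pitch; audio_updates++; }
static void print_next_character(unsigned int pitch) { (void)pitch; chars_printed++; }
static void silence_all_channels(void)               { silenced = 1; }

/* Skeleton of Downstream's main loop: pitch1 ticks upward through
   the conditional branches, and the piece silences itself when
   pitch1 reaches 1496. */
void downstream_loop(void)
{
    unsigned int pitch1 = 0;
    while (pitch1 < 1496) {
        /* In the source, four if/else blocks here decide how the
           audio and the character stream are updated. */
        update_audio(pitch1);
        print_next_character(pitch1);
        pitch1++;
    }
    silence_all_channels();
}
```

The real piece also reads the button state inside the loop, which is what makes individual performances vary; that interaction layer is omitted here.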

From the source code (see appendix E), it is difficult to distinguish which features of this
system were designed prior to implementation and which were developed experimentally. In 
my conversations with Duplantis, I asked for both specifics about the piece and the processes 
that led to her development of it. She replied: 

Everything developed experimentally. I was just trying to make a fun little 
accordion instrument and I hit upon this strange system and decided, no, I’m 
going to play around with this instead. It was all experimental. I think if there 
was anything that I brought to it beforehand that wasn’t experimental, it was 
just these general ideas of how the interaction would work. (Appendix B) 

In her responses, a number of mechanisms and technical decisions with far-reaching consequences for the system as a catalyst of a poetic experience were clarified. For example, Duplantis describes how having to learn the Game Boy coding environment while making the piece had consequences which significantly affect the overall experience of the audience:

My understanding is that by using an unsigned int instead of ubyte, as the number 
ticks up, it goes beyond the bounds of its own spot in memory and starts pointing 
to other sections of memory in the rom. Then it just reads out the program. So 
as you are watching Downstream, what you are watching is an interpretation of 
the program as it runs through memory and spits out what it finds as gibberish. 

And there are a couple spots where that, to me, is pretty clear, just because I 






Figure 5.11: A logical diagram showing the variables, user inputs and processes 
included in the source code for Downstream and explaining how 
it generates and processes data to produce the visuals and audio. 
Made with assistance from Shawn Lawson. 


played around with the guts of it, right? But for example, so there’s the big intro, 
right? And then there’s this long section of white space where there is apparently 
nothing in those registers. So it just comes up with a big long string of nothing.
Then after that, it hits this chunk that includes the word Downstream in it. I 
did not intentionally put that there. That is the header information. If I were 
to change the name of the program whenever I compiled it, whatever I compiled 
it as, that would be the text that shows up there. Then later on, once the text 
from the beginning returns, that was also not intentionally put there by me, that 




















was after I had done most of the work on the project. I had added this text at
the beginning to contextualize it a little bit and give it a bit of a narrative. And 
by adding it into the program, it then caused that text to show up near the end 
of the piece, because that’s where that text was stored in memory. (Appendix B)
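The mechanism Duplantis describes can be illustrated outside the Game Boy. In this sketch (a simulation with invented sizes, not her code), a byte-sized counter wraps around and can never leave a 256-byte region, while a wider 16-bit counter keeps climbing and indexes into whatever the program stored beyond the intended data:

```c
#include <stdint.h>

/* Simulated cartridge memory: the first 256 bytes stand in for the
   intended data region, and later bytes for the rest of the program
   (such as the header text containing the ROM's name). Sizes here
   are invented for illustration. */
enum { ROM_SIZE = 300 };
static char rom[ROM_SIZE];

/* With a byte-sized counter, the index wraps modulo 256, so reads
   can never leave the first 256 bytes. */
char read_with_byte_counter(unsigned long tick)
{
    uint8_t idx = (uint8_t)tick; /* well-defined modular wrap */
    return rom[idx];
}

/* With a wider, 16-bit counter, the index keeps climbing past the
   intended region and reads out whatever lies there: the "gibberish"
   Duplantis describes streaming across the screen. */
char read_with_wide_counter(unsigned long tick)
{
    uint16_t idx = (uint16_t)tick;
    return idx < ROM_SIZE ? rom[idx] : 0; /* bounds guard for the sketch */
}
```

On real hardware there is no bounds guard: the index simply walks through the ROM, which is why the cartridge header and the intro text eventually appear on screen.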

Discussing the set of hard-coded values in the code, Duplantis details how experimentally derived decisions reflect the acquisition of an intuitive knowledge of the Game Boy’s responses to specific coded commands in this learning process. For example, the values 856 and 64, on lines 109 and 110 of the “Downstream.c” file (detailed in Appendix E), were the result of noticing the recurrence of a silent section every time the game-instrument was played. When pitch1 reaches 856, this silent section can be shortened by a jump ahead by a multiple of 8: experimenting with values that were not multiples of 8 resulted in unwanted stuttering rhythms. This is due to the 8-bit nature of the system: when skipping ahead by some multiple of 8, the rhythmic patterns underlying the system remain in sync with each other (see appendix B for Duplantis’ explanation and a thorough discussion of each variable).
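The multiple-of-8 condition is simple modular arithmetic: if the underlying rhythmic cycle repeats every 8 ticks, its phase is the counter value modulo 8, and a jump preserves that phase exactly when the jump size is a multiple of 8. A sketch (the function names are mine, not from the source):

```c
/* Phase of an 8-step rhythmic cycle at a given counter value. */
unsigned int cycle_phase(unsigned int pitch1)
{
    return pitch1 % 8;
}

/* A jump stays in sync exactly when it leaves the phase unchanged,
   i.e. when the skip is a multiple of 8. */
int jump_keeps_sync(unsigned int pitch1, unsigned int skip)
{
    return cycle_phase(pitch1 + skip) == cycle_phase(pitch1);
}
```

The values reported above fit this: a skip of 64 from pitch1 = 856 leaves the phase at 0, whereas an off-multiple skip shifts the phase and produces the stuttering Duplantis heard.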

As she writes, “everything is so interconnected in this piece.” The diagrams in this 
subsection document the process of making the code’s textual logic more immediately per¬ 
ceivable to audiences less comfortable with the Game Boy code syntax. Commenting on the 
validity of the above diagram, Duplantis shares further valuable details about her perception 
of her own work: 

It’s not quite the way that I think about the piece because I guess whenever I was 
constructing it, I was thinking of it a lot more intuitively. I wasn’t as concerned 
with the specific numbers. But yeah, that looks like what the program does, 
especially the second one, just because I feel like Downstream is much more 
linear piece. The Everything Shuts Down branch, I don’t think of that as being 
this whole sweeping other - it is really just whenever it hits that spot, it just 
turns off. I’m looking back at the first one right now... Yeah, I like the second 
one more because it kind of gets across the linearity of the piece and of the loop 
and it has just all the different functions off doing their own thing. I’m going to 
send a diagram of how it is for me. 


Figure 5.12 is the first diagram I had sent to Duplantis. It attempted to apprehend her code as more of a cyclical process, based on my initial read of the C-like Game Boy code, a language and coding environment I was not familiar with prior to this case study.



Figure 5.12: My initial diagrammatic representation of Downstream. 

Figure 5.13, below, is the diagram provided by Duplantis after seeing my diagrams of her work above. It transposes the linear logic represented (relatively accurately, according to Duplantis) in figure 5.11 into the abstract state defined by the pitch1 variable on the x axis and a “duration line” y axis representing the length of the text getting printed to the screen. This diagram reinforces Duplantis’ characterization of the piece as a linear process linking sound and displayed text with an underlying sequence of events, with room for interaction as defined by the pitch wheel and the notes on the upper right quadrant of her figure:

Downstream is a valuable presence in the handmade electronic music landscape because 
of how it brings ideological and material connections to gaming as a mechanism with which to 
mediate the interactions between human, instrumental system and artistic work. Duplantis’ 
interest in combining games and instruments has historical reasons: 

Very early on in my musical - electronic music - education, I guess, saw this big 
connection between the interaction that you would have with a virtual instrument 
and the kinds of interaction that you would find in video games. So I was 
using video games very early on as inspiration for my interaction design for 























my instruments. And as I continued making them, I began incorporating more 
elements. (Appendix B) 

In other words, the interface, the locus of expression between an instrument and its performer (if not its audience as well), collapses onto the artwork in productive ways when that instrument and the system-work are strongly intertwined.

5.5.3 Interface 

The affordances and constraints of the Nintendo Game Boy system and its various 
emulators can be summarized as limited and distinctive. Here, some medium-specific background is illuminating: most games for the platform were designed to be played using six
buttons (a directional pad with up, down, left and right buttons, as well as two additional A 
and B buttons) and a low resolution dot matrix screen. Nevertheless, the popular appeal of 
the gaming console cements it as a technological system which defined industrialized cultures 
in the late twentieth century. Discussing the design philosophy of Nintendo engineer Gunpei 
Yokoi, Kenneth McAlpine writes: 

Hardware design was not just about making the most powerful device possible; 
by considering hardware, software, and user requirements together, he was able 
to create a device that was accessible and cheap, both to buy and to play. And so 
it was that the less technically able Game Boy put video gaming in the pockets 
of the people. (McAlpine 2018, 178) 

Although its simple controls and screen placed it squarely in the hand-held device 
market, it was unique in the sense that its sound capabilities were closer to those of Nintendo's 
Entertainment System (or NES), which had been a popular home console prior to the Game 
Boy's release in 1989 (177-178). Since the release of the Game Boy, however, personal 
computing has evolved in a way that makes the Game Boy's visuals and sound system, 
with their low-resolution characteristics, desirable not for convenience but for the particular 
timbres and modes of interaction they offer. 

This is also true of Downstream. Whether it is played on the original Game Boy using 
a custom cartridge, or on an emulator which simulates the original platform's limitations, 
the player is presented with a small square screen and six buttons to play the game/piece. In 
other words, the interface here is in large part determined by the gaming console it operates 
within (or pretends to operate within, in the case of emulators). As the player is presented with the flow 
of characters that defines Duplantis' work, they are able to affect the visual and auditory 
information presented to them by pressing these buttons (as detailed above) and 
experiencing the consequences. 

For me it was very important to have some kind of visual that imparted some 
understanding of the system to the audience in an intuitive way. And it turns 
out when you make a musical instrument that has this intuitive narrative visualization 
and you package it in a way that's easily distributable for people to pick 
up and play, it kind of looks like a video game. So I just started calling my things 
games. I just started calling my work "games." 

An in-depth discussion of the role of games, gaming, and game-like activity in music 
does not fall within the purview of this dissertation. In the context of Duplantis’ work, 
documentation of the interaction between the medium, the artwork, the system and its 
user-maker is the priority. Having elucidated some of the mechanisms by which these 
elements are related, I'll discuss why I did not develop a recreation or variation on the piece. 

5.5.4 Playing Downstream 

In this case, reiterating and playing the piece is as straightforward as running the 
emulator to hear and see the piece play out, interactively, on the same screen on which I am 
writing this document. Returning to Chikofsky's definition of reverse engineering as wanting 
to "increase the overall comprehensibility of the system for both maintenance and new 
development" (Chikofsky and Cross 1990, 16), the above information, combined with the relevant 
interview transcript, begins to work towards both maintenance and new development. On 
this topic, Duplantis states: 

I also unfortunately can’t work on it anymore. I can’t add to it because changing 
the program also changes the content of the piece. It makes it very hard to do 
any kind of edit to it whatsoever. And not just that, but also the way that 
it compiled. Part of that was specifically the way that it compiled on my old 
laptop, because my current laptop does not compile the same way. I have the 
same software, I’ve tried compiling it the same way, but for whatever reason, on 
my computer, my current laptop on Windows 8, it compiles it differently than 
it did when it was my old laptop on Windows 7. And it just doesn’t work the 
same. So I can’t change it right now. Downstream is just what it is. That code 
is just where it is. 

The piece benefits directly from the desire to offer video game players a relatively 
consistent experience across users and, simultaneously, the affordances and constraints of the 
medium have frozen the piece in time. Downstream stands as a clear example of a technical 
art object in which the boundaries of game design and instrument design are blurred, even 
as that blurring is what makes the work recognizable both implicitly and explicitly. 

In that sense, I have not felt the need to modify or otherwise re-implement Downstream 
in order to reverse-engineer it in my analysis. The extensive details shared by Duplantis in 
her response, even years after coding the piece, more than make up for any interactional 
expertise I may have acquired attempting to build the compiling chain for a modification 
of the source C code. Downstream is clearly a work of handmade electronic music, having 
literally been made by hand by Duplantis to explore the space between game and instrument. 
Extending my own terminology, here the game-instrument and the work are closely correlated 
once more. 

5.5.5 Conclusion 

Downstream is played both as a game and as an instrument. A co-construction of 
multiple interaction paradigms (music and game), it offers a unique extension of handmade 
electronic music into gaming. In a performance and interview for the Indexical label in April 
2020, Duplantis named such systems "performance games" and detailed the community and 
practice building around them as she and others, sometimes in groups, perform with them 
(other works include Atchafalaya Arcade, from 2017, or For Today I Breathe Again, 2019). 
In that same webcast, Duplantis stressed the importance of two things: first, her interest in 
working with technology that may be perfectly functional even if not part of the latest 
generation of devices, such as Game Boys. It is precisely because of their obsolete but culturally 
significant status that most contemporary computing platforms have Game Boy emulators - 
making Duplantis' game more widely distributable than if it had operated natively on those 
contemporary computing platforms. The second thing she stressed is the personal nature of 
her performance games and instruments: first, in the sense that making them and making 
music with them is a cathartic process, and second, that a significant amount of care goes 
into making sure this personal potential can be intuited by her audience when they play 
her solo or multi-player games. Although this personal connection has very specific means 
of implementation in Duplantis' work, community culture is, to a certain extent, a common 
thread to Downstream, Star Networks at the Singing Point, and Pygmy Gamelan, in their 
interpersonal imaginations of technology. 

It is more difficult to distinguish canonical from vernacular technical decisions in the source 
code for Downstream than in circuit schematics, because Game Boy games were never meant 
to be developed by programmers outside of Nintendo. Although there are numerous guides 
which enable one to program for the Game Boy, it is important to consider what exactly 
canonical and vernacular might mean in this context. The question can be formulated as: 
what are the common practices of those who designed the Game Boy, and what are the 
common practices of those who hack it? 

In this particular case, it would seem that Duplantis falls in the latter category. As 
she describes in her interview responses, a significant portion of the technical decisions were 
experimentally derived. In effect, Duplantis had to reinvent Game Boy programming for 
herself in this learning process, resulting in a number of idiosyncratic mechanisms within 
this self-referential multimedia process piece. 

And yet, by opening the source code to the active community of hackers and enthusiasts 
that this piece partially cemented her presence in, Duplantis also contributed to the making 
of an alternate language of computer music within the Game Boy community. 

5.6 The music implicit in 3D printers: Stepper Choir 

Stepper Choir is my exploration of the music implicit in contemporary 3D printing 
technology and the associated modes and reifications of thought. Working with my previous 
nomenclature, it is a work comprising a system and a set of compositions and sound 
installation pieces exploring the musicality inherent to 3D printing. At the core is a system, 
also called Stepper Choir, which attempts to spatialize the sound of stepper motors in a 3D 
sound system so that the movements of the sound mimic the shape of a digital 
object as it gets printed. 

3D printers are driven by three stepper motors, themselves directly controlled by the 
frequency of a square wave coming from an on-board controller. These stepper motors allow an 
extrusion nozzle to move in three dimensions to deposit plastic in a series of layers. This 
additive, physical synthesis process is inherently musical because the software (called a slicer) 
which determines the speed and direction of the motors as they trace these patterns (called 
slices) establishes a one-to-one relationship between movement and pitch. In other words, 
the shape of the object and its internal structure are very strongly correlated to the pitches 
produced by the stepper motors as they work to print said object. 

Throughout my exploration of this relationship, I designed musical compositions for 
this printing system. There were three phases to this project: the initial residency at the 
Sporobole Gallery in Quebec in summer 2018, funded by a grant provided by the Arts 
Services Initiative of Western New York, my installation and performance of the system in 
December 2018, and my residency in EMPAC’s Studio One, funded by a production grant 
provided by the Rensselaer School of Humanities, Arts and Social Sciences. A number of 
small performances with the system also took place. 

5.6.1 Starting materials 

My first experiments with this system were on a Makerbot Replicator Mk1. 3D printers consist 
of the three stepper motors mentioned earlier, but also include a number of intermediate 
computer control layers which translate the instructions provided by the slicer algorithm into 
the series of pulse width modulated signals required to drive the motors. In addition, there 
is also a series of hardware and software components controlling the heating and cooling of the plastic 
deposited by the extrusion nozzle (less relevant in this case). On an Mk1, the on-board 
controller uses the .s3g machining instruction language as an intermediate step between 
g-code and the control signals for the stepper motors. 

As the motors turn on and off according to the square wave, they effectively buzz at 
the frequency which corresponds to their speed. Stepper motor speeds in the machine I 
used to prototype this project seemed to range from the lower teens to a few thousand hertz 
- the lower third or half of the range of human hearing. As such, one can listen to a 3D printer and have 
a rough idea of what it is doing. 
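
This feed-rate-to-pitch relationship can be sketched as a back-of-the-envelope calculation. The 80 steps/mm calibration below is a hypothetical value (common on hobbyist machines; the Mk1's actual figure may differ):

```python
def step_frequency_hz(feed_rate_mm_per_min: float, steps_per_mm: float) -> float:
    """Approximate buzz frequency of a stepper axis: full steps per second.

    feed_rate_mm_per_min: the F value from a G1 command.
    steps_per_mm: machine calibration (hypothetical; varies per printer).
    """
    mm_per_second = feed_rate_mm_per_min / 60.0
    return mm_per_second * steps_per_mm

# The F130.516 feed rate from the example G-code below, at an assumed
# 80 steps/mm: 130.516 / 60 * 80 ~= 174 Hz, near the low end of the
# range described above.
```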

This is not the first project to use 3D printers as instruments - some have written 
python utilities which take standard, simple MIDI files and turn them into print instructions 
(Tim 2009; Westcott 2015). These projects are themselves indebted to various computer-controlled 
printer orchestras, such as the dot matrix ensemble assembled by the Canadian 
group The User in the late 1990s (The User 1999). Part of this is that, historically, various 
other electromechanical systems have been misused to make music: a brief internet search will 
show people building ensembles of floppy disk drives (which also contain stepper motors) to 
play various pop culture hits, from Star Wars' Imperial March to Beethoven sonatas. Stepper 
Choir is also not the first project to combine an amplified 3D printer with multichannel 
sound, with Stanford composer and researcher John Granzow performing versions of the 
piece Vox Voxel, for "3D printer, real-time processing, and ambisonics," as recently as 2014 
(Granzow 2020). 

5.6.2 Analysis of the system 

However, as far as I know, Stepper Choir is unique in the sense that it correlates the 
movement of a single sound source in physical space with the movement of the 3D printer’s 
nozzle in the print space. As I detail below, this correlation is never fully accurate because 
of limitations in the communication protocols between printing software and the on-board 
microcontrollers found in all commercial 3D printers. Nevertheless, since the beginning of 
this project in summer 2018, I have developed or adapted (with help) a set of tools that allow 
anyone to work with 3D printers as a source of sound in a way which enables the system to 
become a compositional tool. This section describes the work done to that end and the art 
produced as part of this experiment. 

Putting contact microphones on each motor allows a user to amplify the sounds 
produced by a 3D printer. I tend to use three (one for each axis), running to three 
preamplified channels on a mixer before being summed and sent to a digital audio interface 
so that a computer can control the spatialization process based on the print instructions. 
Development for this project happened in two separate residencies. I'll begin by describing the design 
process undertaken at Sporobole before detailing the developments enacted at EMPAC. 

To take advantage of Sporobole's 16-speaker setup, I worked with the house engineers 
to set up two overlapping rings of 8 speakers. G-code, also called RS-274 or General code, 
encodes toolpaths, defining where any given tool is supposed to go using either relative or 
absolute coordinates. G-code's age becomes obvious when one realizes that there is very 
little afforded in terms of information feedback from the machine - most 3D printers have 
no way of telling where the printer head is unless it hits one of the "end of travel" detectors. 
Even then, there is no way to program in G-code an instruction which stops the print if 
one of the detectors does trigger. G-code instructions are sent to the microcontroller 
embedded in the automatic machining tool at hand and, in consumer 3D printers, it is up 
to the user to monitor and prevent any failures. This general design paradigm makes it 
effectively impossible to control anything in real time without working from the 
machine's embedded microcontroller, because the computer connected to it does not tend 
to receive confirmation that a movement has been executed properly, let alone when it has 
been executed. 

Notwithstanding this roadblock, it is nevertheless possible to use G-code to control 
sound spatialization systems. G-code encodes toolpaths in three dimensions. A speed can 
be set for each motor for each movement instruction. The typical syntax for a move 
command is: 

G1 X-0.86 Y-0.55 Z0.41 F130.516 E1.005 

With G1 being a movement, the values after x, y and z being (in this case) the rela¬ 
tive movement in inches, F being the feed rate (a common value for all motors, which are 
calibrated individually) and E, the feed rate of plastic in the extruder. For M72/P1 (the 
piece played at Sporobole) and Multichannel Motor Music (the piece displayed and recorded 
at EMPAC), no plastic was printed. For Stepper Choir, although plastic was printed, the 
extruding motor was not amplified, so the E value is left untouched after being generated 
by the slicing algorithm. 

Slicers are another set of software tools which link 3D models (generally provided as 
stereolithography files, or .stl) to g-code. Since 3D printers simply print plastic in 2D shapes, 
the slicer cuts a 3D model into the appropriate series of 2D paths, while also caring for the 
integrity of the final result by providing internal and external supports so that the plastic can cool 
down without drooping out of shape, or only bending within acceptable limits. 

Because I knew neither how to spatialize sound in multichannel systems nor how to use a 3D 
printer, I used my Sporobole residency to tinker with both simultaneously. I realized that 
if I could process G-code into three separate streams, one for each coordinate, with a value 
for how fast to arrive at that value, I could feed each of those streams into a text object in 
MaxMSP. The x and y feeds could be automatically fed into a cartesian-to-polar coordinate 
converter. Although no stock 16-channel panner exists in MaxMSP, I was able to find an 
external programmed by Zachary Seldess to pan effectively across 8 channels. The z stream 
simply feeds into stereo panners for each of the 8 channels, resulting in a 16-channel panning 
system. 
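
The cartesian-to-polar step and the ring panning described above can be sketched as follows. This is an illustrative equal-power panner under my own assumptions, not Seldess's external or the actual MaxMSP patch:

```python
import math

def cart_to_polar(x: float, y: float) -> tuple[float, float]:
    """Convert nozzle x/y to (angle in radians, radius) for ring panning."""
    return math.atan2(y, x), math.hypot(x, y)

def ring_gains(angle: float, n_channels: int = 8) -> list[float]:
    """Equal-power gains across a ring of speakers for a source at `angle`."""
    gains = []
    spread = 2 * math.pi / n_channels
    for ch in range(n_channels):
        speaker_angle = ch * spread
        # angular distance from source to this speaker, wrapped to [-pi, pi]
        d = (angle - speaker_angle + math.pi) % (2 * math.pi) - math.pi
        # cosine taper: full gain at the speaker, zero beyond one spacing away
        g = math.cos(min(abs(d) / spread, 1.0) * math.pi / 2)
        gains.append(g)
    return gains

# A source exactly at speaker 0 gets gain 1 there and ~0 everywhere else;
# a source halfway between two speakers splits power equally between them.
```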

The slicer used throughout this project is called Skeinforge. It produces G-code set 
with absolute coordinates, which simply needs to be parsed into a [value, time to value] pair 
per coordinate in order to be effectively processed by the MaxMSP text object (see appendix 
F). For example, these few lines of g-code: 

G1 X-0.86 Y-0.55 Z0.41 F130.516 E1.005 
G1 X-0.8 Y-0.75 Z0.41 F130.516 E1.015 
G1 X-0.7 Y-0.82 Z0.41 F130.516 E1.021 
G1 X-0.53 Y-0.84 Z0.41 F130.516 E1.029 
G1 X-0.43 Y-0.77 Z0.41 F130.516 E1.036 
G1 X0.24 Y-0.78 Z0.41 F130.516 E1.06 


would be turned into three separate lists (one set of instructions for each axis): 


x 
-0.86 130.516 
-0.8 130.516 
-0.7 130.516 
-0.53 130.516 
-0.43 130.516 
0.24 130.516 

y 
-0.55 130.516 
-0.75 130.516 
-0.82 130.516 
-0.84 130.516 
-0.77 130.516 
-0.78 130.516 

z 
0.41 130.516 
0.41 130.516 
0.41 130.516 
0.41 130.516 
0.41 130.516 
0.41 130.516 
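
The parsing step above can be sketched in Python. This is a hypothetical stand-in for the conversion tool developed with Laurent Herrier and linked in appendix F, not the actual code:

```python
import re

def parse_gcode_moves(lines):
    """Split G1 moves into per-axis (coordinate, feed rate) pairs.

    Mirrors the [value, time to value] lists fed to the MaxMSP text object;
    the feed rate persists across lines until a new F word appears.
    """
    streams = {"X": [], "Y": [], "Z": []}
    feed = 0.0
    for line in lines:
        if not line.startswith("G1"):
            continue  # ignore non-movement commands in this sketch
        fields = dict(re.findall(r"([XYZF])(-?\d+\.?\d*)", line))
        if "F" in fields:
            feed = float(fields["F"])
        for axis in streams:
            if axis in fields:
                streams[axis].append((float(fields[axis]), feed))
    return streams

moves = [
    "G1 X-0.86 Y-0.55 Z0.41 F130.516 E1.005",
    "G1 X-0.8 Y-0.75 Z0.41 F130.516 E1.015",
]
# parse_gcode_moves(moves)["X"] -> [(-0.86, 130.516), (-0.8, 130.516)]
```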


The code, developed in cooperation with Laurent Herrier, is linked to in appendix 
F, along with the MaxMSP patch used for the custom spatialization engine, and the OSC 
pipelines used to feed into SPAT in order to control EMPAC’s ambisonics array. 

The takeaway for this project was developing a system that allowed the sound to be 
spatialized using 16 speakers in paths mimicking the movements of the printer head inside 
a 3D printer. 

The goal was to show how the computer processes developed to turn a digital three 
dimensional shape into a physical object established a one-to-one relationship between 
manufacturing and music. In other words, I wanted people to hear the musicality not intended 
but inherent in the 3D printing process. 

M72/P1 was a 5-part, 15-minute suite of free-form sonic collages arranged in g-code. 
Music of the Spheres was a week-long installation piece documenting the painful and 
methodical slowness of complex print jobs, specifically through the use of Blender (an open 
source 3D modelling program (Blender Foundation 2020)) and its randomize function, and 
ReplicatorG's infill parameter (ReplicatorG was the printer control software used with the 
Replicator Mk1 at Sporobole (ReplicatorG 2020)). 

The title Stepper Choir comes from the fact that the three stepper motors allow for up 
to three simultaneous voices of characteristic square-wave sound sources. Garret Harkawik 
provided stems and assistance in composing some of the MIDI works for the printer. 

Just like most of my recent work, I developed this system to be both an instrument I 
could perform "with" or "along," as well as a standalone installation. I called the performed 
iterations of the piece M72/P1, after the computer code used to make a Makerbot 
Replicator Mk1 play the melody that indicates a print has been completed. I called the 
installation Music of the Spheres because the idea for the project originally came from 
hearing a Makerbot "Cupcake" printer print a sphere in college. Witnessing this, I was 
disappointed by the quality of the misshapen plastic artifact produced, but impressed by its 
musical potential. 

M72/P1 was performed three times: once at an event in the Rensselaer Russell Sage 
Dining Hall's reception room, a second time at the opening event of the gallery show at 
Sporobole in Sherbrooke, Quebec, and a third time as part of the Fall graduate art show at 
Rensselaer in West Hall room 108. The first and third iterations of the piece saw it performed 
over 8 channels rather than 16 because of what was available in those settings. 

In M72/P1, the printer is not producing an object, but rather simply translating 
MIDI information into motor movements to "play" the pitches contained in the MIDI file, 
using a third-party python script (Westcott 2015). The MIDI file is a combination of public 
domain MIDI adaptations of popular songs (including the traditional Italian tune "Bella 
Ciao") and compositions made specifically for the printer by Garret Harkawik. 
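
The MIDI-to-motor translation can be illustrated by inverting the pitch relationship: choose a feed rate whose step frequency matches the note's frequency. This sketch is not Westcott's script, and the steps/mm figure is a hypothetical calibration:

```python
def midi_note_to_feed_rate(note: int, steps_per_mm: float = 80.0) -> float:
    """Feed rate (mm/min) that makes a stepper buzz at a MIDI note's pitch.

    steps_per_mm is a hypothetical calibration value; real printers differ.
    """
    freq_hz = 440.0 * 2 ** ((note - 69) / 12)   # equal temperament
    steps_per_second = freq_hz                  # one full step per cycle
    return steps_per_second / steps_per_mm * 60.0

# A4 (MIDI note 69) at the assumed 80 steps/mm: 440 / 80 * 60 = 330 mm/min
```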

I considered the printer as an instrument, playing the feedback network between the 
contact microphones used to amplify the motors and the 16 channels of audio surrounding 
it. I also played the body of the printer using concert triangles, rasps, and other objects. 
As a first investigation into the potential of the object as a resonant and musical body, this 
seemed particularly fruitful, although it felt most realized when contrasted with the harsher 
tones of the electrical feedback loop happening in the mixer with a distortion and delay 
effect, processing the sound output of the printer and re-injecting it into both the mixer and 
the contact microphones via the speakers. 

Another aspect I experimented with in the third iteration of the system, at the 
Rensselaer Art Department Graduate Fall Show, was the possibility of performing "ghost" 
shapes: the idling machine produces a quasi-constant but always evolving set of small noises. 
By controlling the spatialization with the paths corresponding to the print of a real object, 
but leaving the printer at an idle state, the noises of the machine at rest are sufficient 
to hear the shape evolve. Unfortunately, this is particularly difficult to capture in stereo 
recordings. 

Music of the Spheres (the installation version of the piece) was on display at the 
Sporobole gallery in Sherbrooke, Quebec, Canada, from Dec. 8th 2017 to Dec. 21st 2017. In 
it, a series of spheres represent successive iterations of Blender's "randomize" function. By 
printing each iteration and spatializing its printing noises, one can intuit both the algorithm 
slicing the spheres and the algorithm randomizing those slices. At a low level, this 
corresponds to an experiential understanding of the "tricks" implemented by slicing software to 
produce objects relatively close to the models they're based on; at a higher level, this piece 
shows how musical structure and physical structure are both artificially constructed, a set 
of compromises due to the materiality not just of the plastic or the printer, but also of our 
own perceptual boundaries and how our digital tools negotiate those boundaries. 

This project, effectively between an etude for the system I developed and a fully 
realized piece, shows (to me) the potential of this unintended but very strong connection 
between materiality and musicality. 

From each of these .stl files, a set of g-code is generated with 0 and 40% infill, resulting 
in 50 g-code print instructions. Starting at randosphere10, an additional set of g-code files 
using the "exterior support" and "full support" options is generated, resulting in 30 more g-code 
print files. These 80 print files were provided to the Sporobole staff, along with detailed 
instructions on using the software, for printing over the course of two weeks. As the 
prints were completed, they were added to pedestals below the prints of the digital models 
they related to, which had been hung up on the walls. Although the entire series of designs 
was not printed, the system operated continuously for two weeks, owing to the stewardship 
of the gallery staff. 


Multichannel Motor Music expanded the Stepper Choir system to take advantage of 
a thirty-seven channel ambisonic speaker array at the Experimental Media and Performing 
Arts Center (EMPAC) at Rensselaer in Troy, N.Y. Adapting my system to this new 
output configuration was straightforward because I already had a mechanism which produced 
absolute cartesian coordinates. Therefore, even though I had to send data to SPAT, the 
spatialization engine built for MaxMSP by IRCAM, the only required modification was the 
addition of a set of OSC routing paths which allowed the computer running SPAT to receive 
data from my own machine, which produced the spatialization commands. 
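
How such an OSC message is laid out on the wire can be sketched with only the standard library. The address pattern below is hypothetical, not the actual routing path from appendix F:

```python
import struct

def osc_message(address: str, *floats: float) -> bytes:
    """Pack a minimal OSC message: padded address, type tags, float32 args."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to 4-byte boundaries
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))
    for value in floats:
        msg += struct.pack(">f", value)  # big-endian float32
    return msg

# e.g. a hypothetical source-position message built from one G1 move:
packet = osc_message("/source/1/xyz", -0.86, -0.55, 0.41)
```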

The purpose of the EMPAC residency was to replay, record and experiment with new 
compositional ideas with the Stepper Choir system. Therefore, I composed a new work 
specifically for the ambisonic dome to perform as a public event. As mentioned above, 
composition with this system is done through the design of a 3D object. Working once more 
in Blender, I designed an object by trial and error, collating shapes that I thought sounded 
coherent together and iterating through a number of designs in the few days leading up to 
the performance. 

5.6.3 Interface 

The interface for the system is a combination of a MaxMSP patch, the software used to 
control the printer (ReplicatorG prior to EMPAC, Repetier at EMPAC (Repetier 2020)), 
and the software used to convert g-code into lists in a format appropriate to the MaxMSP 
patch. 

The Stepper Choir system shows parallels with the origins of computer music because 
commands must be processed and calculated prior to performance. This non-real-time aspect 
of the machine puts it squarely in the tradition of early digital synthesis engines, even 
though it relies on a real-time sound environment in the form of MaxMSP. Images of the MaxMSP 
interface are included in appendix F. 

5.6.4 Conclusion 

A number of issues exist with the Stepper Choir software that make it occasionally 
unfruitful. Some of these limitations are due to the very mechanisms that enable 3D printing. 
For example, because there is little two-way communication between the controlling software 
and the printer itself, it is difficult to coordinate visual information with spatialized audio 
information. A limitation of the human ear is that spatial accuracy is uneven in the 360-degree 
field surrounding our heads; therefore, some fast movements behind the head 
may be perceived as blurry or missed entirely. 

5.7 Sounds with origins: Kabelsalat 

Kabelsalat is my longest running work. The title of the work is German for "mess of 
wires," which is usually what my solo shows amount to. Video artist and painter Viktor 
Witkowski, now a lecturer in painting at Dartmouth College and a native German speaker, 
used the word after I described the electronic music system I use in Passive Tones, a 
duo in which I have performed and composed alongside collaborator Karl Hohn since 2015. 

5.7.1 Analysis of the system 

That system consists of sound sources (in Passive Tones these tend to be the Korg 
Volca Beats, a Volca Bass and a Volca Sample sequencer-synthesizers) plugged into a mixer 
with an effects loop (generally, Behringer's copy of the Mackie 1202 board). The sources are 
fed into the effects loop using a send, where they are summed and processed by a high-gain 
distortion pedal (generally, a Diamond Fireburst) and fed back into a channel of the mixer. 
When multiple sources are fed through the loop simultaneously, they interact to produce 
unpredictable timbral variations, where the very high noise floor of the mixer and pedal blends 
variably with the incoming signals, which get compressed and distorted by the distortion 
pedal. With rhythmically shifting and timbrally dense inputs, the system is particularly 
generative because the signals compete for bandwidth. As one signal quiets down, another 
takes over, distorts, takes over the tonal spectrum, dies out, and trades again. I did not design 
this intentionally - rather, this behavior emerged from Karl feeding very loud signals from 
his laptop into our shared headphone monitoring, and me needing to push my own setup 
louder to be able to hear myself. As with Star Networks (which I performed for the first 
time in the early months of Passive Tones' formation), I realized that the distorting behavior 
of the 1202 mixer actually produces quite enjoyable textures. 

Figure 5.14: An adaptation of David Tudor's Rainforest IV, expanded to explicitly 
include feedback loops as well as resonating mediums other than mechanical ones. 

This was especially true when I let the channel into which the effects loop was plugged 
feed back into itself. Although it's possible to push the gain all the way to simple pure 
tones, there are, as in Star Networks, "tipping points" before which the feedback acts more 
as a chaotic timbral control than as a tone generator. I would even suggest that this particular 
setup, in addition to making signals "compete" with each other, turns the "tipping point" 
into an "instability zone" within which the results are not just erratic but also impermanent. 

5.7.2 Implementation and Iterations 

The current state of the system is summarized in the diagram included in figure 5.14. 

A recording of a performance done with this system is available here. 

5.7.3 Interface 

Kabelsalat is an interesting system-piece to work on because it operates at various 
levels. It's possible to use found components and resonators as part of feedback loops, as 
in Star Networks (although I began working on Kabelsalat before I knew of Ralph Jones); 
it's sometimes productive to plug in commercial synthesizers, like my own Volca Sample or 
a laptop; and there is also a lot to be done simply with the setup that's explicitly defined in 
the diagram. 

In other words, this system allows interaction to happen at all levels of electronic 
instruments: components, circuits, systems, interfaces. This is part of the appeal, but it also 
makes it difficult to discuss. For this reason I'll pick a specific instance of its use, which 
fits well into my overall work on Kabelsalat. At a performance at Dartmouth College on 
10/18/2018, I had it set up with a laptop (used as a looping sampler), a Korg Volca Sample 
(used as a drum machine), a Korg Volca Drum (used as another drum machine), a surface 
transducer on top of a table, a contact microphone, and a bass amplifier under the table. 
The goal of the performance was to use the feedback loop and the mixing distortion/compression 
effect it provides when so many sources and outputs are made to interact through 
it. Although I'm quite familiar with all of these individual systems, keeping track of all of 
them simultaneously while attempting to provide an arc to a performance for the audience 
is somewhat of an effort. In other words, the parts and the corresponding interfaces are 
predictable when taken piecemeal, but chaotic and surprising when taken as a whole. It 
is that gentle "user-unfriendliness" that makes Kabelsalat an explicitly post-optimal setup 
(Teboul 2018). 

5.7.4 Conclusion 

I realized that I liked to think of Kabelsalat as an expanded version of Rainforest IV 
where I was making various inputs fight for control of various electronic spaces (the mixer, 
the compressing distortion pedal). Reflecting on how to develop this, I worked toward 
making the system more context-dependent. Based on an idea I 
had developed with composer and philosopher Sparkles Stanford for our piece Sonic Decay 
at the 2014 International Zizek Studies Conference, I knew that I could use conductive found 
materials like potatoes in combination with traditional electrical components to dramatic 
effect (Teboul and Stanford 2016). I also knew from a 2013 workshop with Nicolas Collins 
I organized at Dartmouth College that I had a couple of inductive pickups that could form 
interesting feedback loops in the electromagnetic domain rather than the purely electrical 
domain. Combining these with a battery-powered transducer I received as a gift in 2009 and 
had never known what to do with, I realized that I could use my expanded Rainforest IV setup 
to make the various resonating and oscillating bodies in my cultural toolkit (as generated by my 
synthesizers) and my environment (as enrolled by the transducer and a contact microphone 
or the inductive pickups) interact, with metaphorical and literal consequences, 
as a composition. 



CHAPTER 6 
Results 

6.1 Understanding works of handmade electronic music 

The principal result of this dissertation is a better understanding of the pieces studied 
and a set of comparison points for future case studies using a similar or comparable framework. 
Until now, studies of handmade electronic music were adapted to each specific case, a 
behavior largely due to the complexity of individual projects worthy of academic scrutiny. 
By specifically selecting projects with pre-existing information, and using the framework to 
identify the missing data as well as occasionally filling these lacunae, I have shown how my 
framework was able to deal with a variety of real-life examples, to the point where individual 
experiments, sometimes untouched for over 50 years, can be developed further. This is the 
primary contribution to this specific niche of music studies. 

For example, the unique position acquired by Steve Reich over the course of his early 
career and his standing in the experiments-in-art-and-technology community enabled him to 
receive assistance from Bell Laboratories engineers. As such, the phase shifting pulse gate, 
a relatively simple device co-designed by experts in a golden era of electronics, probably 
does roughly what Steve Reich had in mind when he designed it with Owens and Flooke. 
This was exactly the downfall of the work as a whole: more than too mechanical, it was 
too premeditated, and it was quickly abandoned. As far as we know, there were no attempts to 
modify or otherwise re-assess the potential of the machine. This shows not just what the 
relationship between technical decisions and musical consequences was, but also clarifies the 
attitude with which the piece was approached. 

The discussion of Pygmy Gamelan showed that the bit shift registers were selected 
from surplus component stores, and thus reflect more directly the everyday of small-scale 
integrated circuit development and production of the time. In DeMarinis’ words, the circuit 
in which they are contextualized is meant to refer to an abstracted, “folkloric” use of the 
devices more than to their techno-capitalist reality. In this sense, the separation between 
application and reality of the supply chain allows for speculative uses of components, a form 
of techno-sonic science fiction. The analysis of the system allows us to understand exactly 
how the musical patterns of the machine were constructed, which clarifies the labor inherent 
in the work as well as the music implicit in the technology. 

A piece like Star Networks at the Singing Point shows how the indeterminate scoring 
practice coming from Fluxus, minimalism, and the New York post-war avant-garde engages 
with this potential music implicit in technology in a way which permits local and 
personal interpretations of electronics without losing character as a piece. Star Networks is 
notable in the sense that, by using the circuit topology of the star network, it enables one 
or more performers to contribute something personal to a generative work. The circuits are 
too complex to be auditorily recognizable from the sounds produced alone, but the work is 
not simply auditory: as Jones elaborates, there is an explicit theatricality in the laying bare 
of components, as if to show the musicality between components in addition to playing it 
(R. Jones 2010). 

This is reflected in my own projects. Kabelsalat uses the ability of feedback loops to 
produce sound based on the circuits included within the loop. It is a system for revealing, 
through sound, some of the physical properties and histories of its constituent materials. 
Stepper Choir is also meant to reveal something about the underlying logic 
of the mechanics of a 3D printer and of the slicing algorithm that connects 3D models 
to their 2D layers, offering the audience an opportunity to experience those technical assemblages 
in an artistic context and begin to intuit their programmatic logic. However, because they 
both use relatively complex underlying components and computer code, there is only so 
much an aesthetic experience can do to act as a shortcut for thorough technical analysis. 
Although I believe it is possible to develop an intuition for both of these systems, or, in 
theory-of-knowledge-production terms, that it is possible to develop embodied knowledge of 
the operation of these complex machines, I also think that any intuition developed is at best 
partial: some assignment of meaning is left to the imagination of the listener or reader. 

Pieces like Duplantis’ Downstream show how ad-hoc a completed code-instrument- 
game-system can be. In her own words: 

Everything developed experimentally. I was just trying to make a fun little 
accordion instrument and I hit upon this strange system and decided, no, I’m 
going to play around with this instead. It was all experimental. I think if there 
was anything that I brought to it beforehand that wasn’t experimental, it was 
just these general ideas of how the interaction would work. 




And yet, because of the nature of the medium, Downstream is perhaps the most 
consistent system discussed in this dissertation. From my perspective, all the works discussed 
required a significant amount of investment before a single sound was heard. Duplantis’ 
work is an exception in the sense that all I had to do was download the folder she distributes 
as this release, look up how to run the Game Boy emulator included therein, and the 
piece-instrument runs right away. Each iteration is different, because it is basically impossible to 
press the same keys at the same times as in the previous run, but the system consistently 
works and will work in the future, at least until part of the software chain breaks because 
of an update. Even the other digital works, such as the Pure Data patches or the 3D printer 
work, have so many moving parts that even simply misusing these systems might make 
them seem non-functional. 

6.2 Suggestions for documentation of future work 

Methodologically, this dissertation also suggests some meta-narratives implicit in the 
documentation of handmade electronic music works specifically and electronic art works more 
generally. In showing how context and technics could be studied together, I demonstrated 
not only how they explained some musical structures but also how they served as a starting point to 
understand the work of works themselves. In other words, good documentation not only 
accurately details a piece, its realization and, potentially, its origins; it also hints 
at the labor, trials, errors, experiments and other tribulations that went into 
previous iterations of that work. 

This is thoroughly indebted to Tudor’s and CIE’s practice of a living memory of live 
electronic music. Reiterating, or as I wrote, “covering,” a work adds in some sense new 
material to the record of its existence. In some cases it may be appropriate to overwrite 
past iterations; in others it may be important to consider the piece as a record, perhaps even 
more than as a piece. For example, prior iterations of Reich’s Pendulum Music matter 
little because the work is so consistent, but each version of John Cage and Lejaren Hiller’s 
HPSCHD probably deserves to be documented better than any one of them has been so far. 
This active model of study is not particularly original: participant observation and critiques 
of external objectivity are well documented (Spradley 1980; Burawoy 1998). Nevertheless, 
the visual artist Rafael Lozano-Hemmer reminds us to doubt those who proclaim having a 
method for media art conservation: 




Mistrust anyone who has a “method” for conservation of Media Art. Anyone, 
such as myself, who offers a set of rules is someone who is not considering the 
vast range of disparate experiences, methods, constraints and dependencies that 
can arise even within the work of a single artist. All we can do is suggest a bunch 
of tips, wait for an artist to prove those tips useless, and then review the tips. 
(Lozano-Hemmer 2017) 

Working from Lozano-Hemmer’s warning, my “tip” in this context, coming from extensive 
work in the understanding of past handmade electronic music works, is that although 
it is not always essential for the conservation of a work, understanding the labor that went 
into its existence up until the time of examination is always an essential conversation to have 
with the work and its context. This is what I tried to do with the documentation provided 
for my own pieces. 

6.3 The necessity of mixed-method analysis 

Another result offered by the case studies is that methods of analysis must be willing 
to follow the terms on which the artwork was made, or risk reducing the work studied to a 
simplified projection of itself, as limited by the vocabulary and theoretical underpinnings of 
that method. Continuing my use of signal-processing terminology, the making process’ 
lossy characteristics are exacerbated when the discourse-making processes used to translate 
technical media into textual ones work not with the characteristics of the artwork but 
against them. For example, a harmonic analysis of Pulse Music would do little to explain 
exactly what Reich was trying to do with the phase shifting pulse gate. The availability 
of original technical systems, access to the composer, and accounts of a work’s iterations can 
illuminate the intentions and results of the composer and shape the scope of the analysis of 
the piece. 

In that sense, my discussion of Pulse Music extends Reich’s article because it attempts 
to understand exactly what in the piece was co-dependent on the phase shifting pulse gate, 
and what about the system was pre-designed by Reich. The study of the Pygmy Gamelan 
that follows extends Fred Turner’s discussion of the system: looking at the schematic 
shared by DeMarinis not only shows us how to make a copy, but making the copy and 
discussing it with DeMarinis reveals that there is a good reason why the filter values are not 
notated: the device was in fact built multiple times. 

Figure 6.1: An idealized representation of my practice-based research cycle. Not 
pictured are the various concurrent cycles happening simultaneously. 

My analysis of Star Networks would 
be less valuable if it did not qualify Nicolas Collins’ brief description of it with nuances 
regarding Ralph Jones’ past and the robust way in which the instructions are assembled in 
order to facilitate performability. Discussing Downstream would be much less interesting if 
one were not allowed to look at the source code, since some of the underlying mechanics 
aren’t perceivable, at least until one knows what to look and listen for. 

More than anything, it is this attitude of engaging with the personal, social and 
technical conditions that enabled an artwork’s existence that I wished to communicate and 
instantiate in this dissertation. In doing so, I elaborated a mixed-method approach which I 
have summarized in figure 6.1. 

In practice, identifying connections between humans, sounds, and technologies leads to 
the unearthing of additional humans, artworks and technologies. Hence a cyclical process: 
studying, for example, Pygmy Gamelan, led to Paul DeMarinis’ other works, and made 
me re-implement the design of his instrument as a digital, performable device, so that I 
could test my theoretical understanding of the device by comparing the output of my digital 
model against that of the one included in the recording available online. Every study involves 
some sort of practical test and various levels of re-implementation, some of which I call “covers.” 
In doing so, one finds new materials with which to repeat the cycle. It is important to also consider 
the overlap of research cycles, where progress in one study might trigger a new cycle, 
or feed into various stages of a past research project. This diagram is a highly idealized 
schematic for the sake of discussion: it attempts to summarize my workflow where interviews 
and technical analysis are complementary because the record of actions that each electronic 
music instrument represents is partial, and benefits from being placed in relationship to what 
the maker remembers of the making processes, their intentions at the time, the resources 
they had access to, and so on. 

More generally, my study-specific results might be of interest to a select few, but 
more importantly, it is this attitude and method I have presented that I will try to propagate 
in future teaching and research. 



CHAPTER 7 
Conclusion 

7.1 We can make technical objects of study in music partially discursive 
with two adapted tools: engineering analysis methods and artistic practice 

In this dissertation, I have suggested that if music studies wants to engage with 
technical objects of study as discussed by You Nakai, it should look at the tactics leveraged 
in technical disciplines to study, understand and develop these objects. Of course, 
in a sense, these disciplines render these non-discursive objects of study discursive, reeling 
them back into the epistemological project of analysis. It is nevertheless noticeable that even 
once we can talk about circuits, subsystems, routines and topologies with words rather than 
circuits and circuit diagrams, these discussions require so much unique vocabulary that this 
discursivity bears the clear mark of its context. These words force the non-discursive nature 
of the medium into the text. 

More indirectly, artistic practice has also shown itself to be a worthwhile mode of 
investigation. It does not need to be discourse-making to be valuable: poetic experiences, 
artistic works, technical systems and musical pieces do not need to be translated into 
discourse at any point to be valued. Of course, in the context of handmade electronic music 
generally and this dissertation project specifically, these discursive and non-discursive 
approaches coexist at various scales and times, sometimes side by side, sometimes one after 
the other. For example, it was not until I built a digital model of the phase shifting pulse 
gate that I wondered about the switching speed of the transistor Steve Reich used, but that 
was easily put into words. On the other hand, it remains difficult to explain exactly how 
satisfying it is to listen to a Purr Data patch mimicking the behavior of the Pygmy 
Gamelan device down to the proper resonance. 

7.2 The lossy nature of handmade electronic music 

The previous chapter illustrates how various circumstantial parameters such as the level 
of documentation of an older work or access to authors at or after the time of creation can 
limit an understanding of an electronic music work in which a unique system was developed. 

I showed how historical studies and the inevitably partial information available required 
the development of hypotheses as to the range of past possibilities, resulting in speculative 
or “best guess” experiments which could then be tested against the partial documentation, 
or put to a past actor who might confirm whether or not the analysis of the work was 
accurate. In the case of Pygmy Gamelan, the schematic and the recording of the 
device proved to be all that was needed to develop a reasonable analog of the original system. 
This process unearthed an undocumented series of Pygmy Gamelan systems, all with unique 
variations more sophisticated than the modeled device. However, pending access to this 
second generation of electronics, DeMarinis and his audience remain those best able to 
discuss them. 

In contrast, work on Pulse Music proved generative: the imprecise documentation of 
the system, the absence of schematics or recordings, and the lack of contact with 
Reich himself required educated guesses based on contextual cues and Reich’s prior writing. 
It also revealed the absence of information regarding the whereabouts of the Bell Labs engineers, 
Larry Owens and Owen Flooke, who might have been able to provide information about 
the piece. Although Reich is still alive, no attempt to reach him through his publisher or 
agency has been successful, and the device itself therefore remains unaccounted for as well. 
Technical knowledge of both the artwork and the system gives us a sense of what might have 
been important to the author in their process of co-construction: this is a consequence of 
reverse engineering. This does not imply that models or new versions need to be in the spirit 
of the original, but rather that it is important, to the extent possible, to assess how clearly 
that spirit remains. An analysis of the new iteration should exhibit an at least cursory 
understanding of which decisions are or are not within the original spirit of the project, 
if it is meaningful to ask such a question, or perhaps at least acknowledge that there is 
not enough information available to comment on what motivated a particular handmade 
electronic music experiment. 

Borrowing from digital signal processing parlance, I have referred to the making of 
electronic music works as a lossy process (Malazita, Teboul, and Rafeh 2020). Lossy 
originally refers to compression processes in which a compromise in accuracy is made to facilitate 
transmission, usually by reducing the resulting filesize. In other words, in lossy compression 
the compressed signal can only be partially recovered, and is experienced with artifacts. The 




MP3 file format is notorious for being a lossy encoding format, based on the psychoacoustical 
limitations of the human ear (Sterne 2012). FLAC (Free Lossless Audio Codec) is a common 
lossless audio compression format: the filesizes are smaller than the uncompressed WAV 
or AIFF formats, but the original signal can still be recovered accurately (losslessly) before 
being played back. 
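The distinction can be sketched in a few lines of Python. This is a toy illustration only, not an actual MP3 or FLAC codec: the sample values and the quantization step of 0.25 are arbitrary assumptions, standing in for the psychoacoustic decisions a real lossy codec makes.

```python
import zlib

# A toy "signal": five samples between -1 and 1.
signal = [0.12, -0.47, 0.93, -0.88, 0.50]

# Lossless (FLAC-like in spirit): compress the exact bytes.
# Decompression recovers the original bit-for-bit.
raw = repr(signal).encode("utf-8")
compressed = zlib.compress(raw)
assert zlib.decompress(compressed) == raw  # exact recovery

# Lossy (MP3-like in spirit): discard detail deemed negligible,
# here by coarse quantization. The original can only be approximated.
def lossy_encode(samples, step=0.25):
    return [round(s / step) for s in samples]  # small integers, cheap to store

def lossy_decode(codes, step=0.25):
    return [c * step for c in codes]

recovered = lossy_decode(lossy_encode(signal))
print(recovered)  # [0.0, -0.5, 1.0, -1.0, 0.5]: artifacts of the loss
```

The lossless path returns exactly what went in; the lossy path returns something close but permanently altered, which is the sense in which I extend the term to the making and recovery of these works.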

Because making an electronic music work involves crystallizing knowledge, resources 
and intentions in an artefact, set of artefacts, or events, and because these intentions, 
knowledge and resources can only partially be crystallized in and recovered from electronic music 
works, I extend the metaphor to discuss the genesis, lives and deaths of these 
works as lossy. This does not presuppose the existence of a canonical work; quite the 
contrary, it accepts the canonical work as invented, as a fiction, or as a myth. Lossy, here, offers some friction 
with the signal processing context from which it is borrowed, in the sense that it has nothing 
to do with quality as a metric and everything to do with quality as a sense of the poetics of 
the technosocial experience. This shift, from quantitative to qualitative, is one I discussed 
in an earlier publication on the related concept of post-optimality (Teboul 2018). 

This loss is generative: as evidenced in the process of understanding the works of 
Reich and DeMarinis via digital recreations, the necessity of making decisions in the face of 
uncertainty resulted in potentially new variations on the original work. This generative 
aspect of practice-based research is neither new nor undocumented; it nevertheless seems helpful 
to name such a phenomenon, at least in the context of electronic media art, to facilitate 
discussion. 

In some cases, interaction with the original authors, artefacts and documentation 
produced enables a partial recovery of lost information. For example, prior to this dissertation, 
the most detailed account available regarding the motivations underlying Pygmy Gamelan 
was a brief discussion in Turner’s chapter on the work, based on a quote by DeMarinis. 
Pending approval from DeMarinis, this document contains the most detailed explanation of 
DeMarinis’ ideas for this system and how they are materially realized in the various systems 
now revealed as all being Pygmy Gamelan devices. 

The extent to which each case study required experimentation with various forms of 
re-creation cannot be overstated. In the spirit of my original meeting with 
surviving members of CIE, who informally discussed how Tudor’s influence and legacy were 
best preserved via practice, the interactional expertise derived from developing electronic 
music history as a performance or compositional project has offered insight that would be 
hard to obtain otherwise (Collins and Evans 2002). It is hard to realize exactly how important 
the opening and closing times of each gate are to the “phase shifting pulse gate” 
without attempting to build some version of it. It is difficult to understand how rigid that 
same system is without trying to make it work beyond its self-imposed grid. It is effectively 
impossible to understand what Star Networks at the Singing Point sounds like without 
hearing one of its iterations, and I also believe that it is difficult to understand exactly what 
performing it means without trying to actually play it. This is where the epistemological 
nature of the dissertation meets the boundaries of the traditional research project in the 
Anglo-Saxon university system. These types of interactional expertise, also discussed by 
Harry Collins under the term “embodied knowledge” (Collins and Kusch 1998) or by DeLeon 
as a “cognitive biography of things” (de Leon 2016), are relatively meaningless in words: 
I can describe the process of tuning a feedback loop with random components, as I did in 
my case study of Star Networks, but it would be much easier and more valuable to demonstrate 
the process in person. In that sense, practices of live electronic music aren’t simply kept 
as practical traditions out of tradition: they self-correct to require practice in order to be properly 
“preserved.” 

Engagement with practitioners and practices enables us to situate audible musical 
structures on a spectrum from intentionally composed to experimentally obtained. 
Engagement with technical media clarifies the mechanisms that transformed these intentional 
compositions and experiments into systems. This confirms the presuppositions outlined in 
this dissertation’s introduction and the premises on which this technical and humanistic 
work was undertaken. What was unexpected, however, was the extent to which participating 
in or leading projects to recreate partially documented works of what I previously called 
“technologically dependent” electronic music works can contribute to endeavors of situating 
the role of experimentation in the original creation of those works. By staying relatively 
close to the material means of the original authors’ ways of implementing their musical ideas, 
one can experience an analog to the questions, decisions and experiences that these 
original authors might have been faced with. Therefore, this dissertation highlights, first, the 
importance of thorough material histories of electronic music works, and second, the value 
of experimentation in assessing these material histories. 




7.3 Electronic music history and knowledge production 

There are a number of theoretical frameworks within which to assess the value of this 
experimentation. In their 2002 article discussing what they perceive as a “Third Wave of 
Science Studies,” Harry Collins and Robert Evans offer a complementarity of “interactional” 
and “contributory” forms of expertise to explain scientific research happening outside of the 
accredited laboratory (Collins and Evans 2002, 244). However, in the context of handmade 
electronic music, this model of science studies appears to lose a nuance of scale: because 
artistic practices are not institutionalized and legitimized using the same processes as 
scientific research, contributions and interactions can be much more local than in a system 
which has vested and explicit interests in rapidly documenting and propagating interactions 
and contributions in the form of talks and publications through the medium of accountable 
outlets (conferences and journals). Electronic music, even in its most scientific and 
institutionalized forms, seems rife with concurrent and independent inventions: consider for 
example Moog’s and Buchla’s quasi-simultaneous development of voltage-controlled circuitry, 
or the various reiterations of delay as a musical effect as new technologies permit endless 
variations of the effect. 

In 1998, Harry Collins elaborated on Michael Polanyi’s concept of tacit knowledge in 
“The Shape of Actions,” co-authored with Martin Kusch (Collins and Kusch 1998): 

Case studies have shown that knowledge maintenance and transfer cannot be reduced 
to matters of information (or habit) and have thus revealed that some notion 
like tacit knowledge—which works at the level of the collectivity—is needed 
if we are to understand the world. (89) 

This identifies the crucial component of “collectivity” which was not explicitly discussed 
in Collins and Evans’ discussion of interactional expertise. The Pygmy Gamelan or Phase 
Shifting Pulse Gate devices offer a number of material affordances and constraints: although 
these can be partially described through words (as I’ve attempted to do here) part of this 
dissertation leverages the unique advantages of practice-based research to understand exactly 
how the intent of the composer may have materialized between realizations of their musical 
works and the specifics of their electronic music instruments. 

Knowledge is contained in these electronic music systems in its realized forms. Devices 
such as the Phase Shifting Pulse Gate represent the compromise made at any given time by 
these systems’ makers to achieve their goals. DeMarinis made all variations of the Pygmy 
Gamelan circuit based on what information and components he had available when he needed 
to complete these circuits. They display a sliver of the knowledge he had at the time. The 
concept of tacit knowledge is helpful in appreciating the myriad ways in which making is 
a lossy process of encoding: as this knowledge is reified, it becomes more or less difficult 
to assess whether or not something was developed through material experimentation or 
through prior knowledge. In other words, if looking at a circuit or code always tells the 
media archaeologist what was done, it is only occasionally possible to assess whether or not 
something was attempted as the result of tacit knowledge, complete chance, or knowledge 
that that thing would achieve the desired result. This is further complicated by the fact 
that, as evidenced in the case studies of chapter 5, this information is often forgotten by the 
very people possessing it shortly after it exists. 

Handmade electronic music is a worthwhile case study in the theory of knowledge 
production because it exemplifies in dramatic detail how artworks, material objects and 
creators are co-constructed. It isn’t simply that the composer creates a score, and learns 
from the score, it is that the maker is at any moment attempting to both develop a piece and 
the system for the piece. In science, the isolation of the expert is due to the relatively low 
number of experts in any given population; in handmade electronic music, the isolation of the 
practitioner is very much due to social, historical and technical contexts rather than technical 
ability. In that sense, as explained differently in the introduction, knowledge produced 
through handmade electronic music must be assessed in these three containers: artifacts, 
contexts, and humans. 

The implications for knowledge acquisition are profound: handmade electronic music 
may be neither a musical nor a technical practice, but rather simply be a third practice, 
somewhere between the other two rather than an intersection. Members of Composers 
Inside Electronics tended to come from European music backgrounds, but handmade electronic 
musicians today only occasionally come from or participate in western music communities. 
I suggest here that handmade electronic music can be a way for people interested in 
experimentation with electronics and sound to engage in this combination of media without 
necessarily having to acknowledge the history of either, regardless of how much that history may be 
visible in the very media they will use to do so. 

Boundary work such as this is meaningful because it illustrates the process by which 
fields of inquiry can leverage a process traditionally associated with the instrumentalization, 
or hierarchization, of new disciplines by more established ones. 

7.4 Future Works 

This dissertation includes only the fragment of research necessary to describe and 
test the methodology of reverse engineering in musical co-production contexts that I have 
discussed above. In the process leading up to it, I have begun case studies and undertaken 
interviews for a number of handmade electronic music works not included in this write-up, 
because doing them justice required more time than was available. These include: 

• George Lewis’ work Voyager 

• David Tudor’s compositions with the Neural Synthesizer 

• MSHR’s compositions with Nestars 

• Philip White’s compositions with the Feedback Instrument 

• Charles Dodge’s composition Earth’s Magnetic Field 

• Pauline Oliveros’ work with the Expanded Instrument System and her homemade 
Buchla-type modular synthesizer 

Now that this mode of knowledge production and its implications have been clarified, 
the next step will be to apply it and refine it in those cases. It was designed to be general 
enough to allow productive comparisons across case studies - this will only be possible if a 
number of these studies continue to be implemented and collected side-by-side. 

7.5 Afterword 

Two days before defending this dissertation, one of my committee members, the media 
scholar and artist Michael Century, sent me something I did not think existed: a recording 
of one of the original performances of Pulse Music with the Phase Shifting Pulse Gate from 
May 27th, 1969 at the Whitney Museum (MinimalEffort 2019). This recording, posted to 
Youtube, was vaguely sourced as being from a French radio archive. It overlaid a partial 
recording of the piece (skipping the middle section) with various French commentary, 




suggesting that it was broadcast in France shortly after the recording was made. The 
recording was posted a month after I had performed my version of Pulse Music and Pulse 
Music Variation at Arete in 2019. Since I had finished the project, I had stopped researching 
the topic, and did not notice the video until it was pointed out to me right before the defense. 
At that point, I was somewhat upset at myself for never having even known of the existence 
of this recording. 

Although I’m still in the process of attempting to reach out to the anonymous account 
which posted the file, it does allow me to compare my speculative design, based on Reich’s 
article, with what seems to be a legitimate document. Further analysis could be done to 
determine a number of things from the device’s output (as I did in my study of the Pygmy 
Gamelan device), but a simple listening does indicate that my Purr Data system is fairly 
effective at recreating the musical and technical behaviors of the original. This is as close a 
confirmation of the validity of my approach, at least for this case study, as one could hope 
for. The only way to get more direct feedback regarding the quality of my interpretations 
would be to discuss them with the author(s) themselves. 32 

Around the same time, as briefly discussed previously, Paul DeMarinis insisted that if 
I could reverse engineer the un-written values (in both senses of the word) in his schematic 
for specific instances of his systems, then perhaps he really never needed to write them down 
in the first place. Another one of my dissertation committee members, the composer and 
installation artist Nina Young, encouraged me to reformulate my stance regarding project 
documentation in light of this: I want to reiterate that I do not think this document requires 
or even encourages media artists to include anything specific as part of the documentation of 
their installations, compositions, or otherwise handmade works of electronic music. Rather, 
it points to the implications of making public specific types of documents. Share a schematic, 
and you will encourage a specific community to engage with your work. Share a recording, 
and another group of people might express interest. These interests will be directly shaped 
not just by the format of the documentation shared, but also by that audience’s perception 
of that documentation and that format: this is an inherently relational process. 

To conclude, this relational process is of course political. Documenting, covering, 
re-iterating or expanding a work inevitably contributes, whether negatively or positively, 
to canonizing it. In this sense the work of the handmade electronic musician, which 


32. Steve Reich’s agent has never responded to any of my inquiries. 




as discussed above is always in some way historical, cannot be removed from its own context. 
As recently stated: 

I think that by carefully studying the histories of current day technologies, we 
can uncover insights into the constellation of human and technical arrangements 
that can help to projectively crystallize an understanding of the real nature of 
our current condition. This is based on my prejudice that cultures have long-standing 
currents of agenda—over hundreds of years and often unspoken—and 
that technologies, like the rest of material culture, are a reification of these 
agendas. They are neither discoveries nor neutral. They come out of the dreams of 
people and offer indications of social relations. (DeMarinis, in Pritikin (2012)) 

Handmade electronic music stands as a fertile ground for intuitively coming to terms, 
on personal scales, with some aspects of this “real nature of our current condition,” because it 
requires re-inventing so-called “shared knowledge” into a locally functional version of itself, as 
materialized by electronic instruments. In engaging with the dreams of people and their role 
in indicating social relations, musical technologies remind us that we are all cultivating activist 
lives. In this project, these lives happen to be (mostly) in sound. There, as Tara Rodgers 
reminds us: 

There can [...] be many approaches to cultivating an activist life in sound— 
many areas toward which we can direct our efforts—resulting in a proliferation 
of sonic-political acts that have local and far-reaching ripple effects. (Rodgers 
2015, 82) 

It is my hope that the present piece of writing, by documenting some of these possible 
approaches, will encourage future developments. As we watch the world burn, with riots 
following the seemingly endless hate against people of color, and indigenous people, and 
queer people, and non-binary people, between a pandemic and the coming climate collapse, 
commitments to justice in these future developments cannot end on the page (if they ever 
go there at all). I believe everyone should have the chance to make that page, if it must 
exist, begin, as mine did, with what they wish they could be worried about instead of 
impending doom. In this case, in my case, it was music and the people and things that make 
it electrifyingly electric. But, more importantly, in learning from these people and things I 




have come to realize that a just world is inevitable if we do not wish for a dead one. With 
or without listeners, this planet will sound ecstatic: may this document stand as a hopeful 
celebration of the future musics of our collective survival. 





I can’t stand it, I know you planned it 
I’mma set it straight, this Watergate 
I can’t stand rockin’ when I’m in here 
’Cause your crystal ball ain’t so crystal clear 
So while you sit back and wonder why 
I got this fuckin’ thorn in my side 
Oh my god, it’s a mirage 
I’m tellin’ y’all, it’s sabotage 

So, so, so, so listen up, ’cause you can’t say nothin’ 
You’ll shut me down with a push of your button 
But, yo, I’m out and I’m gone 
I’ll tell you now, I keep it on and on 
 
’Cause what you see, you might not get 
And we can bet, so don’t you get souped yet 
Scheming on a thing, that’s a mirage 
I’m trying to tell you now, it’s sabotage 

Why 

Our backs are now against the wall 
Listen all y’all, it’s a sabotage 
Listen all y’all, it’s a sabotage 
Listen all y’all, it’s a sabotage 
Listen all y’all, it’s a sabotage 


Sabotage, Beastie Boys (1994) 



BIBLIOGRAPHY 


Akrich, Madeleine. 1987. “Comment décrire les objets techniques?” Techniques et culture, 
no. 9: 49-64. 

-. 1992. “The De-Scription of Technical Objects.” In Shaping Technology/Building 
Society: Studies in Sociotechnical Change, edited by Wiebe Bijker and John Law, 205- 
224. Cambridge: MIT Press. 

Anderson, Benedict. 2006. Imagined Communities: Reflections on the Origin and Spread of 
Nationalism. London: Verso Books. 

Anderton, Craig. 1980. Electronic Projects for Musicians. Logan: Amsco. 

Armitage, Jack, Fabio Morreale, and Andrew McPherson. 2017. “‘The Finer the Musician, 
the Smaller the Details’: NIMEcraft under the Microscope.” In Proceedings of the 
International Conference on New Interfaces for Musical Expression. 

Bakan, Michael B., Wanda Bryant, Guangming Li, David Martinelli, and Kathryn Vaughn. 
1990. “Demystifying and Classifying Electronic Music Instruments.” Selected Reports in 
Ethnomusicology 8:37-63. 

Becker, Howard Saul. 1982. Art Worlds. Berkeley: University of California Press. 

Beirer, Ingrid, ed. 2011. Paul DeMarinis: Buried In Noise. Heidelberg: Kehrer Verlag. 

Belevitch, Vitold. 1962. “Summary of the History of Circuit Theory.” Proceedings of the 
Institute of Radio Engineers 50 (5): 848-855. 

Bell, Eamonn Patrick. 2019. “The Computational Attitude in Music Theory.” PhD diss., 
Columbia University. doi:10.7916/d8-yfr2-k514. 

Biggerstaff, Ted J. 1989. “Design Recovery for Maintenance and Reuse.” Computer 22, no. 
7 (July): 36-49. doi:10.1109/2.30731. 

Bijker, Wiebe E. 1995. “Sociohistorical Technology Studies.” In Handbook of Science and 
Technology Studies, Revised Edition, 229-256. Thousand Oaks: SAGE Publications. 




Bijker, Wiebe E., Thomas P. Hughes, Trevor Pinch, and Deborah G. Douglas, eds. 2012. 
The Social Construction of Technological Systems: New Directions in the Sociology and 
History of Technology. Second Edition. Cambridge: MIT Press. 

Bijker, Wiebe E., and John Law, eds. 1992. Shaping Technology/Building Society: Studies in 
Sociotechnical Change. Cambridge: MIT Press. 

Bissell, Chris. 2004. “Models and ‘Black Boxes’: Mathematics as an Enabling Technology in 
the History of Communications and Control Engineering.” Revue d’histoire des sciences 
57 (2): 305-338. doi:10.3406/rhs.2004.2215. 

Blender Foundation. 2020. Blender. Org - Home of the Blender Project - Free and Open 3D 
Creation Software. Accessed April 28. https://www.blender.org/. 

Boeva, Yana, Devon Elliot, Edward Jones-Imhotep, Shezan Muhammedi, and William J. 
Turkel. 2018. “Doing History By Reverse Engineering Electronic Devices.” In Making 
Things And Drawing Boundaries: Experiments In The Digital Humanities, edited by 
Jentery Sayers, 163-176. Minneapolis: University of Minnesota Press. 

Borgdorff, Henk. 2012. The Conflict of the Faculties: Perspectives on Artistic Research and 
Academia. Amsterdam: Leiden University Press. 

Born, Georgina. 1995. Rationalizing Culture: IRCAM, Boulez, and the Institutionalization 
of the Musical Avant-Garde. Berkeley: University of California Press. 

-. 1999. “Computer Software as a Medium: Textuality, Orality and Sociality in an 

Artificial Intelligence Research Culture.” In Rethinking Visual Anthropology, 139-69. 
New Haven: Yale University Press. 

Born, Georgina, and Alex Wilkie. 2015. “Temporalities, Aesthetics and the Studio: An 
Interview with Georgina Born.” In Studio Studies: Operations, Topologies & Displacements, 
139-156. New York: Routledge. 

Boylestad, Robert L. 2003. Introductory Circuit Analysis. 10th Edition. Upper Saddle River: 


Prentice Hall. 





Bukvic, Ivica Ico, Albert Gräf, and Jonathan Wilkes. 2016. “Latest Developments with Pd- 
L2Ork and Its Development Branch Purr-Data.” In Proceedings of the 5th International 
Pure Data Convention. New York. 

-. 2017. “Meet the Cat: Pd-L2Ork and Its New Cross-Platform Version ‘Purr Data’.” 
In Proceedings of the Linux Audio Conference. Saint-Étienne, France. 

Burawoy, Michael. 1998. “The Extended Case Method.” Sociological Theory 16 (1): 4-33. 

Burhans, Ralph W. 1973. “Simple Bandpass Filters.” Journal of the Audio Engineering 
Society 21 (4): 275-277. 

Candy, Linda. 2006. Practice Based Research: A Guide. Technical report. Sydney: Creativity 
& Cognition Studios, University of Sydney. https://www.creativityandcognition.com/wp-content/uploads/2011/04/PBR-Guide-1.1-2006.pdf. 

Carlin, H. 1964. “Singular Network Elements.” IEEE Transactions on Circuit Theory 11, no. 
1 (March): 67-72. doi:10.1109/TCT.1964.1082264. 

Catsoulis, John. 2005. Designing Embedded Hardware. Sebastopol: O’Reilly. 

Eppley, Charles, and Sam Hart. 2016. Circuit Scores: Electronics After David Tudor. 
http://avant.org/event/circuit-scores/. 

Chechile, Alex, and Suzanne Thorpe. 2017. “Live Performance Considerations for Pauline 
Oliveros’ Early Electronic Music.” In Improvisation and Listening: Conference in Mem¬ 
ory of Pauline Oliveros. Montreal, June. 

Chiba, Shun-Ichi. 1997. Interview with Paul DeMarinis. Accessed May 27, 2020. 
http://www.lovely.com/press/articles/interview.demarinis.japan.html. 

Chikofsky, E. J., and J. H. Cross. 1990. “Reverse Engineering and Design Recovery: A 
Taxonomy.” IEEE Software 7, no. 1 (January): 13-17. doi:10.1109/52.43044. 

Choi, Taeyoon. 2016. Circuit Scores: Electronics After David Tudor 3.27. 
http://blog.sfpc.io/post/141215915331/circuit-scores-electronics-after-david-tudor-327. 

Chun, Wendy Hui Kyong. 2011. Programmed Visions: Software and Memory. Cambridge: 
MIT Press. 





Cluett, Seth Allen. 2013. “Loud Speaker: Towards a Component Theory of Media Sound.” 
PhD diss., Princeton University. 

Collins, Harry, and Robert Evans. 2002. “The Third Wave of Science Studies: Studies of 
Expertise and Experience.” Social Studies of Science 32 (2): 235-296. 

Collins, Harry, and Martin Kusch. 1998. The Shape of Actions: What Humans and Machines 
Can Do. Cambridge, Mass: MIT Press. 

Collins, Nicolas. 2006. Handmade Electronic Music: The Art of Hardware Hacking. New 
York: Routledge. 

-. 2007. “Live Electronic Music.” In The Cambridge Companion To Electronic Music, 

38-54. Cambridge: Cambridge University Press. 

-. 2009. Handmade Electronic Music: The Art of Hardware Hacking. Second. New York 

and London: Routledge. 

Cook, Nicholas. 2013. Beyond the Score: Music as Performance. New York: Oxford University 
Press. 

Darlington, Sidney. 1953. Semiconductor Signal Translating Device. Patent #2,663,806, filed 
December 1953. 
 
-. 1999. “A History of Network Synthesis and Filter Theory for Circuits Composed of 
Resistors, Inductors, and Capacitors.” IEEE Transactions on Circuits and Systems 46 
(1): 10. 

de Leon, David. 2016. “The Cognitive Biographies of Things.” In Doing Things with Things, 
123-140. New York: Routledge. 

DeCarlo, Raymond A., and Pen-Min Lin. 2009. Linear Circuits: Time Domain, Phasor and 
Laplace Transform Approaches. Dubuque: Kendall Hunt Publishing. 

DeLanda, Manuel. 1997. A Thousand Years of Nonlinear History. New York: Zone Books. 

-. 2016. Assemblage Theory. Edinburgh: Edinburgh University Press. 

DeMarinis, Paul. 1975. “Pygmy Gamelan.” Asterisk: A Journal of New Music 1, no. 2 (May): 


46-48. 








DeMarinis, Paul. 2016a. Circuit Schematics. http://pauldemarinis.org/Circuits.html. 

-. 2016b. Pygmy Gamelan. http://pauldemarinis.org/PygmyGamelan.html. 

Dewar, Andrew Raffo. 2009. “Handmade Sounds: The Sonic Arts Union and American 
Technoculture.” PhD diss., Wesleyan University. 

Diduck, Ryan. 2018. Mad Skills: Music and Technology Across the Twentieth Century. 
Watkins Media Limited. 

Digbee. 2019. Cyber Folk: Digbee’s Electronic Chronicle. Rutherford, NJ: Harpy Gallery. 

Dolan, Emily. 2012. “Toward a Musicology of Interfaces.” Keyboard Perspectives 5:1-12. 

-, ed. 2013. “Critical Organology.” In American Musicological Society Annual Meeting. 

Pittsburgh. 

Dörfling, Christina. 2019. “The Oscillating Circuit: The Resonating Provenance of Electronic 
Media.” PhD diss., University of Arts Berlin. 

Döring, Sebastian, and Jan-Peter E.R. Sonntag. 2019. “Apparatus Operandi 1: Anatomy // 
Friedrich A. Kittler’s Synthesizer.” In Rauschen, 109-146. Leipzig: Merve. 

Dummer, Geoffrey William Arnold. 2013. Electronic Inventions and Discoveries: Electronics 
from Its Earliest Beginnings to the Present Day. New York: Elsevier. 

Dunne, Anthony. 2005. Hertzian Tales: Electronic Products, Aesthetic Experience, and 
Critical Design. Cambridge: MIT Press. 

Eisler, Matthew N. 2017. “Exploding the Black Box: Personal Computing, the Notebook 
Battery Crisis, and Postindustrial Systems Thinking.” Technology and Culture 58, no. 
2 (June): 368-391. doi:10.1353/tech.2017.0040. 

Emerson, Lori. 2014. Reading Writing Interfaces: From the Digital to the Bookbound. 
Minneapolis: University of Minnesota Press. 

Ernst, Wolfgang. 2016a. Chronopoetics: The Temporal Being and Operativity of Technological 
Media. London; New York: Rowman & Littlefield International. 

-. 2016b. Sonic Time Machines: Explicit Sound, Sirenic Voices, and Implicit Sonicity. 

Amsterdam: Amsterdam University Press. 







Evens, Aden. 2005. Sound Ideas: Music, Machines, and Experience. Minneapolis: University 
of Minnesota Press. 

Farbstein, Rebecca. 2011. “A Critical Reassessment of Pavlovian Art and Society, Using 
Chaîne Opératoire Method and Theory.” Current Anthropology 52 (3): 401-432. 
doi:10.1086/660057. 

Feld, Steven. 2015. “Acoustemology.” In Keywords in Sound, 12-21. Durham, NC: Duke 
University Press. 

Fischer, Ernst. 1963. The Necessity of Art: A Marxist Approach. Translated by Anna Bostock. 
Baltimore: Penguin Books. 

Franco, Sergio. 1974. “Hardware Design of a Real-Time Musical System.” PhD diss., Uni¬ 
versity of Illinois. 

GBDK. 2020. GBDK. Accessed April 29. http://gbdk.sourceforge.net/. 

Ghazala, Qubais Reed. 2004. “The Folk Music of Chance Electronics: Circuit-Bending the 
Modern Coconut.” Leonardo Music Journal 14:97-104. 

-. 2005. Circuit-Bending: Build Your Own Alien Instruments. New York: Wiley Pub¬ 
lishing. 

Gieryn, Thomas F. 1983. “Boundary-Work and the Demarcation of Science from Non- 
Science: Strains and Interests in Professional Ideologies of Scientists.” American 
Sociological Review 48 (6): 781-795. doi:10.2307/2095325. 

Gnoli, Claudio. 2006. “Phylogenetic Classification.” Knowledge Organization 33 (3): 138-152. 

Gordon, Theodore Barker. 2018. “Bay Area Experimentalism: Music and Technology in the 
Long 1960s.” PhD diss., University of Chicago. 

Gottschalk, Jennie. 2016. Experimental Music Since 1970. New York: Bloomsbury Publishing 
USA. 

Granzow, John. 2020. CV. Accessed April 5, 2020. 
https://ccrma.stanford.edu/~granzow/cv/index.html. 





Grimshaw, Jeremy. 2019. Pulse Music, for Phase Shifting Pulse Gate. Accessed October 16. 
https://www.allmusic.com/composition/pulse-music-for-phase-shifting-pulse-gate-mc0002501034. 

Haddad, Donald Derek, Xiao Xiao, Tod Machover, and Joe Paradiso. 2017. “Fragile 
Instruments: Constructing Destructible Musical Interfaces.” In Proceedings of the International 
Conference on New Interfaces for Musical Expression, 30-33. Copenhagen. 

Hall, Stuart. 1996. “Race, Articulation, and Societies Structured in Dominance.” In Black 
British Cultural Studies: A Reader, edited by Houston A. Baker, Manthia Diawara, and 
Ruth Lindeborg, 16-60. Chicago: University of Chicago Press. 

Hamscher, Walter, and Randall Davis. 1984. “Diagnosing Circuits With State: An Inherently 
Underconstrained Problem.” In Conference of the Association for the Advancement of 
Artificial Intelligence Proceedings, 142-147. University of Texas at Austin. 

Henriques, Julian. 2011. Sonic Bodies: Reggae Sound Systems, Performance Techniques, and 
Ways of Knowing. New York: Continuum. 

Herrington, Donald E. 1986. How to Read Schematics. Indianapolis: H.W. Sams. 

Hertz, Garnet, and Jussi Parikka. 2012. “Zombie Media: Circuit Bending Media Archaeology 
into an Art Method.” Leonardo 45 (5): 424-430. 

Heying, Madison. 2019. “A Complex and Interactive Network: Carla Scaletti, the Kyma 
System, and the Kyma User Community.” PhD diss., UC Santa Cruz. 

Hitchins, Ray. 2016. Vibe Merchants: The Sound Creators of Jamaican Popular Music. New 
York: Routledge. 

Ho, Chung-Wen, A. Ruehli, and P. Brennan. 1975. “The Modified Nodal Approach to Net¬ 
work Analysis.” IEEE Transactions on Circuits and Systems 22, no. 6 (June): 504-509. 
doi:10.1109/TCS.1975.1084079. 

Holmes, Thomas. 2004. Electronic and Experimental Music Pioneers in Technology and Com¬ 
position. New York: Taylor & Francis. 

Holzaepfel, John. 1994. “David Tudor and the Performance of American Experimental Music, 
1950-1959.” PhD diss., City University of New York. 




Horowitz, Paul. 2015. The Art of Electronics. Third Edition. New York: Cambridge Univer¬ 
sity Press. 

Hsu, Hansen. 2007. “Opening the ‘Black Box’ of the ‘Black Box’: The Metaphor of the ‘Black 
Box’ and Its Use in STS.” 

Hughes, Thomas. 1987. “The Evolution of Large Technological Systems.” In The Social Con¬ 
struction of Technological Systems: New Directions in the Sociology and History of Tech¬ 
nology, 51-82. Cambridge: MIT Press. 

Huhtamo, Erkki, and Jussi Parikka. 2011. “Introduction: An Archaeology of Media Archae¬ 
ology.” Media Archaeology: Approaches, Applications, and Implications: 1-21. 

Institute of Electrical and Electronics Engineers, ed. 2000. IEEE 100: The Authoritative Dic¬ 
tionary of IEEE Standards Terms. 7th ed. New York: Standards Information Network, 
IEEE Press. 

Johnson, J. Richard. 1994. Schematic Diagrams: The Basics of Interpretation and Use. 
Clifton Park: Delmar Cengage Learning. 

Jones, Hedley. 2010. “The Jones High Fidelity Audio Power Amplifier of 1947.” Caribbean 
Quarterly 56 (4): 97-107. 

Jones, Ralph. 1978. “Star Networks at the Singing Point (Performance Instructions).” 
http://ubumexico.centro.org.mx/text/emr/interviews_media/inside_electronics/jones/star-networks_perf-instructions.pdf. 

-. 1979. “Star Networks at the Singing Point (Notes).” 
http://ubumexico.centro.org.mx/text/emr/interviews_media/inside_electronics/jones/star-networks_notes.pdf. 

-. 2004. “Composer’s Notebook: Star Networks at the Singing Point.” Leonardo Music 

Journal 14 (1): 81-82. 

-. 2010. Interview with Ralph Jones and Matt Wellins. 
http://www.ubu.com/emr/interviews.html. 







Jones-Imhotep, Edward, and William J. Turkel. 2019. “The Analog Archive.” In Seeing the 
Past with Computers: Experiments with Augmented Reality and Computer Vision for 
History, 95-115. Ann Arbor: University of Michigan Press. 

Kant, David. 2019. “Machine Listening as a Generative Model: Happy Valley Band.” PhD 
diss., UC Santa Cruz. 

Kartomi, Margaret. 2001. “The Classification of Musical Instruments: Changing Trends in 
Research from the Late Nineteenth Century, with Special Reference to the 1990s.” 
Ethnomusicology 45 (2): 283-314. 

Kelly, Caleb. 2009. Cracked Media: The Sound of Malfunction. Cambridge: MIT Press. 

Kerman, Joseph. 1982. The Beethoven Quartets. Westport: Greenwood Press. 

Kielkowski, Ron. 1998. Inside SPICE. London: McGraw-Hill. 

Kittler, Friedrich A. 1999. Gramophone, Film, Typewriter. Stanford, Calif: Stanford Univer¬ 
sity Press. 

Knorr-Cetina, K. 1999. Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, 
Mass: Harvard University Press. 

Lavengood, Megan. 2019. “What Makes It Sound ’80s?: The Yamaha DX7 Electric Piano 
Sound.” Journal of Popular Music Studies 31, no. 3 (September): 73-94. doi:10.1525/ 
jpms.2019.313009. 

Lewis, George E. 1999. “Interacting with Latter-Day Musical Automata.” Contemporary 
Music Review 18 (3): 99-112. 

-. 2000. “Too Many Notes: Computers, Complexity and Culture in Voyager.” Leonardo 

Music Journal 10 (December): 33-39. doi:10.1162/096112100570585. 

-. 2007. “Mobilitas Animi: Improvising Technologies, Intending Chance.” Parallax 13 

(4): 108-122. 

-. 2009. “Interactivity and Improvisation.” In The Oxford Handbook of Computer Mu¬ 
sic, edited by Roger Dean. New York: Oxford University Press. 







Lewis, George E. 2017. “From Network Bands to Ubiquitous Computing: Rich Gold and 
the Social Aesthetics of Interactivity.” In Improvisation and Social Aesthetics, edited by 
Georgina Born and Will Straw. Durham: Duke University Press. 

-. 2018. “Why Do We Want Our Computers to Improvise?” In The Oxford Handbook 

of Algorithmic Music, edited by Roger Dean and Alex McLean, 123-130. New York: 
Oxford University Press. 

Li, Guangming. 2006. “The Effect of Inharmonic and Harmonic Spectra in Javanese Game- 
lan Tuning (1): A Theory of the Slendro.” In Proceedings of the 7th WSEAS Interna¬ 
tional Conference on Acoustics & Music: Theory & Applications, 65-71. Cavtat, Croatia: 
World Scientific and Engineering Academy and Society (WSEAS), June. 

Lincoln, Harry B. 1970. The Computer and Music. Ithaca, NY: Cornell University Press. 

Loughridge, Deirdre. 2013. “From Bone Flute to Auto-Tune: On the Long History of Music 
and Technology.” In Conference of the American Musicological Society. Pittsburgh. 

Lovely Music. 1981. Lovely Little Records (Liner Notes). 
http://www.lovely.com/albumnotes/notes0106.html. 

Lozano-Hemmer, Rafael. 2017. Best Practices for Conservation of Media Art from an Artist’s 
Perspective. Accessed February 23, 2019. 
https://github.com/antimodular/Best-practices-for-conservation-of-media-art. 

Lucier, Alvin. 1998. “Origins of a Form: Acoustical Exploration, Science and Incessancy.” 
Leonardo Music Journal 8:5-11. 

Lysloff, Rene T. A., and Leslie C. Gay, eds. 2003. Music and Technoculture. Middletown, 
Conn: Wesleyan University Press. 

Magnusson, Thor. 2009. “Epistemic Tools: The Phenomenology of Digital Musical Instru¬ 
ments.” PhD diss., University of Sussex. 

-. 2017. “Musical Organics: A Heterarchical Approach to Digital Organology.” Journal 

of New Music Research 46 (3): 286-303. 






Mahillon, Victor-Charles. 1893. Catalogue Descriptif Et Analytique Du Musee Instrumental 
(Historique Et Technique) Du Conservatoire Royal De Musique De Bruxelles. Bruxelles: 
A. Hoste. 

Malazita, James, Ezra J. Teboul, and Hined Rafeh. 2020. “Digital Humanities as Epistemic 
Cultures: How DH Labs Make Knowledge, Objects, and Subjects.” Digital Humanities 
Quarterly 14 (1). 

Marino, Mark C. 2020. Critical Code Studies. Cambridge: MIT Press. 

Marsh, Allison. 2019. The Factory: A Social History of Work and Technology. Santa Barbara, 
California: Greenwood, An imprint ABC-CLIO, LLC. 

Marshall, Owen. 2014. “Synesthetizing Sound Studies and the Sociology of Technology.” 
Sociology Compass 8 (7): 948-958. 

Martinelli, G. 1965. “On the Nullor.” Proceedings of the IEEE 53, no. 3 (March): 332-332. 
doi:10.1109/PROC.1965.3733. 

Marx, Karl, Friedrich Engels, and Jodi Dean. 2017. The Communist Manifesto. London: 
Pluto Press. 

Mattern, Shannon. 2017. Code + Clay... Data + Dirt: Five Thousand Years of Urban Media. 
Minneapolis ; London: University of Minnesota Press. 

McAlpine, Kenneth B. 2018. Bits and Pieces: A History of Chiptunes. New York: Oxford 
University Press. 

McKittrick, Katherine, and Alexander G. Weheliye. 2017. “808s and Heartbreak.” Propter 
Nos 2 (1): 13-42. 

Miller, Brian. 2020. “Enminded, Embodied, Embedded: The Concept of Musical Style from 
Leonard Meyer to Machine Learning.” PhD diss., Yale University. 

Mills, Mara. 2012. “Media and Prosthesis: The Vocoder, the Artificial Larynx, and the 
History of Signal Processing.” Qui Parle 21, no. 1 (December): 107-149. 
doi:10.5250/quiparle.21.1.0107. 

MinimalEffort. 2019. Steve Reich - Excerpt from ’Pulse Music’ (1969), June. Accessed 
May 30, 2020. https://www.youtube.com/watch?v=XVSNV9EGB3s. 




Minkowsky, John. 1980. “Interview with Ralph Jones.” 
http://ubumexico.centro.org.mx/text/emr/interviews_media/inside_electronics/jones/jonesinterview.pdf. 

Mor, Bhavya, Sunita Garhwal, and Ajay Kumar. 2019. “A Systematic Literature Review 
on Computational Musicology.” Archives of Computational Methods in Engineering 26 
(April): 1-15. doi:10.1007/s11831-019-09337-9. 

Morgan, Frances. 2017. The EMS Synthi 100: Dialogues Between Invention, Preservation 
and Restoration, July. Accessed April 26, 2020. 
https://fylkingen.se/tongues/index.php/may-tongues_/the-ems-synthi-100-dialogues-between-invention-preservation-and-restoration/. 

mrpeach. 2020. cd4094.c Source Code. Accessed January 3. 
http://svn.code.sf.net/p/pure-data/svn/trunk/externals/mrpeach/cmos/cd4094.c. 

Mumma, Gordon. 1974. “Witchcraft, Cybersonics, Folkloric Virtuosity.” In Darmstadter 
Beitrage Zur Neue Musik, Ferienkurse ’74, 14:71-77. Musikverlag Schott. 

Nagel, L., and R. Rohrer. 1971. “Computer Analysis of Nonlinear Circuits, Excluding 
Radiation (CANCER).” IEEE Journal of Solid-State Circuits 6, no. 4 (August): 166-182. 
doi:10.1109/JSSC.1971.1050166. 

Nakai, You. 2016. “On the Instrumental Natures of David Tudor’s Music.” PhD diss., New 
York University. 

-. 2017a. “David Tudor and The Occult Passage of Music.” Talk given at the Meeting 
of the Anthroposophical Society of Norway. November. Accessed February 13, 2019. 
https://www.academia.edu/35233758/David_Tudor_and_The_Occult_Passage_of_Music. 

-. 2017b. “Inside-Out: David Tudor’s Composition of the Pepsi Pavilion as a Musical 
Instrument.” Journal of the American Musical Instrument Society 43:175-202. 

Nishizawa, Ryue. 2010. “Conversation between Ryue Nishizawa and Sou Fujimoto.” El 
Croquis, no. 151: 5-19. 
 
Novak, David. 2013. Japanoise: Music at the Edge of Circulation. Durham: Duke University 
Press. 






Orman, Jack. 2016. Is It Okay to Clone Electronics? Accessed September 28, 2019. 
http://www.muzique.com/clones.htm. 

Oudshoorn, Nelly E.J., and Trevor Pinch. 2003. How Users Matter: The Co-Construction of 
Users and Technologies. Cambridge: MIT Press. 

Ouzounian, Gascia. 2010. “An Interview with Paul DeMarinis.” Computer Music Journal 34 
(4): 10-21. 

Parikka, Jussi. 2011. “Operative Media Archaeology: Wolfgang Ernst’s Materialist Media 
Diagrammatics.” Theory, Culture & Society 28, no. 5 (September): 52-74. 
doi:10.1177/0263276411411496. 

-. 2012. What Is Media Archaeology? Cambridge: Polity Press. 

Patteson, Thomas. 2016. Instruments for New Music: Sound, Technology, and Modernism. 
Berkeley: University of California Press. 

Perner-Wilson, Hannah. 2011. “A Kit-of-No-Parts.” Master’s Thesis, Massachusetts Institute 
of Technology. 

-. 2015. Traces With Origin. Accessed October 14, 2019. 
https://www.plusea.at/?category_name=traces-with-origin. 

Pfender, Joe. 2019. “Oblique Music: American Tape Experimentalism and Peripheral Cul¬ 
tures of Technology, 1887 and 1950.” PhD diss., New York University. 

Pinch, Trevor. 2016. ““Bring on Sector Two!” The Sounds of Bent and Broken Circuits.” 
Sound Studies 2 (1): 36-51. 

-. 2019. “From Technology Studies to Sound Studies: How Materiality Matters.” Epis¬ 
temology & Philosophy of Science 56 (3): 123-137. 

Pinch, Trevor, and Wiebe E. Bijker. 1984. “The Social Construction of Facts and Artefacts: 
Or How the Sociology of Science and the Sociology of Technology Might Benefit Each 
Other.” Social Studies of Science 14, no. 3 (August): 399-441. 
doi:10.1177/030631284014003004. 

Pinch, Trevor, and Harry Collins. 2014. Golem at Large: What You Should Know about 
Technology. Cambridge: Cambridge University Press. 







Pinch, Trevor, and Frank Trocco. 2004. Analog Days: The Invention and Impact of the Moog 
Synthesizer. Cambridge: Harvard University Press. 

Pritikin, Renny. 2012. Interview with Paul DeMarinis. Text. Accessed May 26, 2020. 
https://www.artpractical.com/feature/interview_with_paul_demarinis/. 

Provenzano, Catherine. 2019. “Emotional Signals: Digital Tuning Software and the Meanings 
of Pop Music Voices.” PhD diss., New York University. Accessed April 26, 2020. 
http://search.proquest.com/pqdtglobal/docview/2273314056/6BE136BF4BC64A99PQ/1. 

Puckette, Miller. 1996. “Pure Data: Another Integrated Computer Music Environment.” In 
Proceedings of the Second Intercollege Computer Music Concerts, 37-41. Tachikawa. 

-. 2001. “New Public-Domain Realizations of Standard Pieces for Instruments and Live 

Electronics.” In Proceedings of the International Conference on Computer Music, 4. 

Reich, Steve. 1972. “An End to Electronics—Pulse Music, the Phase Shifting Pulse Gate, 
and Four Organs, 1968-1970.” Edited by Alvin Lucier. Source Magazine 10 (February). 

Reich, Steve, and Paul Hillier. 2004. Writings on Music 1965-2000. Oxford University Press, 
October. doi:10.1093/acprof:oso/9780195151152.001.0001. 

Rekoff, M. G. 1985. “On Reverse Engineering.” IEEE Transactions on Systems, Man, and 
Cybernetics 15, no. 2 (March): 244-252. doi:10.1109/TSMC.1985.6313354. 

-. 1989. “Teaching Circuit Analysis from a Problem Solving Perspective.” In Proceed¬ 
ings of the IEEE Energy and Information Technologies in the Southeast’ Conference, 
1398-1403 vol. 3. Columbia, SC, April. doi:10.1109/SECON.1989.132652. 

Repetier. 2020. Accessed April 28. https://www.repetier.com/. 

ReplicatorG. 2020. ReplicatorG Is a Simple, Open Source 3D Printing Program. Accessed 
April 28. http://replicat.org/. 

Rheinberger, Hans-Jörg. 1994. “Experimental Systems: Historiality, Narration, and 
Deconstruction.” Science in Context 7 (1): 65-81. 

Robbins, Allan H., and Wilhelm C. Miller. 2012. Circuit Analysis: Theory and Practice. 
Clifton Park: Cengage Learning. 






Rodgers, Tara. 2015. “Cultivating Activist Lives in Sound.” Leonardo Music Journal 25:79- 
83. 

Rogalsky, Matthew. 2006. “Idea and Community: The Growth of David Tudor’s Rainforest, 
1965-2006.” PhD diss., City University London. 

Rovan, Joseph "Butch". 2009. “Living on the Edge: Alternate Controllers and the Obstinate 
Interface.” In Mapping Landscapes for Performance as Research: Scholarly Acts and 
Creative Cartographies, edited by Shannon Rose Riley and Lynette Hunter, 252-259. 
London: Palgrave Macmillan. 

Schill, Angela, and Douglas Lynner, eds. 1976. “Composer/Performer.” Synapse Magazine 1 
(4): 6. 

Siegert, Bernhard. 2013. “Cultural Techniques: Or the End of the Intellectual Postwar Era 
in German Media Theory.” Theory, Culture & Society 30, no. 6 (November): 48-65. 
doi:10.1177/0263276413488963.

Spradley, James P. 1980. “Ethnography and Culture; Ethnography for What?” In Participant 
Observation. New York: Holt, Rinehart and Winston. 

Sterne, Jonathan. 2012. MP3: The Meaning of a Format. Durham: Duke University Press. 

Sterne, Jonathan, and Tara Rodgers. 2011. “The Poetics of Signal Processing.” differences 
22 (2-3): 31-53. 

Striegl, Libi, and Lori Emerson. 2019. “Anarchive as Technique in the Media Archaeology Lab | Building a One Laptop Per Child Mesh Network.” International Journal of Digital Humanities (April). doi:10.1007/s42803-019-00005-9.

Taylor, Timothy Dean. 2016. Music and Capitalism: A History of the Present. Chicago: 
University of Chicago Press. 

Teboul, Ezra J. 2015. “Silicon Luthiers: Contemporary Practices in Electronic Music Hardware.” Master’s Thesis, Dartmouth College.

———. 2017a. Running List: Non-Cis-Men Makers of Electronic Music Hardware, May. Accessed October 15, 2019. https://redthunderaudio.com/post/160747721237/running-list-non-cis-men-makers-of-electronic.





Teboul, Ezra J. 2017b. “The Transgressive Practices of Silicon Luthiers.” In Guide to Unconventional Computing for Music, edited by Eduardo Miranda, 85-120. New York: Springer.

———. 2018. “Electronic Music Hardware and Open Design Methodologies for Post-Optimal Objects.” In Making Things and Drawing Boundaries: Experiments in the Digital Humanities, edited by Jentery Sayers, 177-184. Minneapolis: University of Minnesota Press.

———. 2019. “Pulse Music” (Reich 1969) & “Pulse Music Variation” (Teboul 2019). Accessed May 25, 2020. https://redthunderaudio.com/post/185229047757/pulse-music-reich-1969-pulse-music.

———. 2020a. “Bleep Listening.” In Handmade Electronic Music: The Art of Hardware Hacking, Third Edition, edited by Nicolas Collins. New York: Routledge.

———. 2020b. “Hacking Composition: Dialogues With Musical Machines.” In The Bloomsbury Handbook of Sonic Methodologies, edited by Marcel Cobussen and Michael Bull. New York: Bloomsbury Academic.

———. 2020c. “Pop Rock.” In Tonebook, vol. II. Brooklyn, N.Y.: Inpatient Press.

Teboul, Ezra J., and Sparkles Stanford. 2016. “Sonic Decay.” International Journal of Zizek 
Studies 9 (1). 

Texas Instruments. 1972. The LM3900 A New Current-Differencing Quad of Plus or Minus 
Input Amplifiers. Application Note 72. 

The User. 1999. Symphony for Dot Matrix Printers. Accessed June 6, 2020. http://www.theuser.org/dotmatrix/projinfo/en/frame_index.html.

Théberge, Paul. 1993. “Random Access: Music, Technology, Postmodernism.” In The Last Post: Music After Modernism, 150-182. Manchester: Manchester University Press.

———. 1997. Any Sound You Can Imagine: Making Music/Consuming Technology. Hanover: Wesleyan University Press.

Tim. 2009. Fun with MIDI, CNC and Vector Maths (Mid2cnc.Py), April. Accessed June 6, 
2020. http://tim.cexx.org/?p=633. 










Tudor, David. 1976. “The View From Inside.” David Tudor Archive, Box 19, Folder 11, Getty 
Museum Research Library, Los Angeles. 

Tudor, David, and Victor Schonfeld. 1972. “From Piano to Electronics.” Music and Musicians 
20 (12): 24-26. 

Turner, Fred. 2010. “The Pygmy Gamelan as Technology of Consciousness.” In Paul DeMarinis: Buried in Noise, edited by Ingrid Beirer, 22-31. Heidelberg: Kehrer Verlag.

Tyne, Gerald. 1977. Saga of the Vacuum Tube. Indianapolis: Sams. 

Vlach, Jiří, and Kishore Singhal. 1983. Computer Methods for Circuit Analysis and Design.
New York: Springer. 

Von Hilgers, Philipp. 2011. “The History of the Black Box: The Clash of a Thing and Its Concept.” Cultural Politics: An International Journal 7, no. 1 (March): 41-58. doi:10.2752/175174311X12861940861707.

Von Hornbostel, Erich M., and Curt Sachs. 1961. “Classification of Musical Instruments.” 
Translated by Anthony Baines and Klaus P. Wachsmann. The Galpin Society Journal 
14 (March): 3-29. 

Wark, McKenzie. 2004. A Hacker Manifesto. Cambridge, MA: Harvard University Press. 

Warner, Daniel. 2017. Live Wires: A History of Electronic Music. London: Reaktion Books 
Limited. 

Werner, Kurt James. 2014. “The TR-808 Drum Machine & Its Emulations.” Presented at the Bone Flute to Auto-Tune Conference, Berkeley, California, April.

———. 2017. “Virtual Analog Modeling of Audio Circuitry Using Wave Digital Filters.” PhD diss., Stanford University.

———. 2018. “‘A Punchier Crisp Bass’: Shifting Pitch in the Roland TR-808 Bass Drum Circuit.”

Werner, Kurt James, and Jonathan Abel. 2016. “Modal Processor Effects Inspired by Hammond Tonewheel Organs.” Applied Sciences 6, no. 7 (June): 185. doi:10.3390/app6070185.






Werner, Kurt James, and Mayank Sanganeria. 2013. “Bit Bending: An Introduction.” In 
Proc. of the 16th Int. Conference on Digital Audio Effects, 8. 

Wershler, Darren. 2016. Boxes and the Work of Articulation, November. Accessed September 28, 2019. https://residualmedia.net/boxes-and-the-work-of-articulation/.

Westcott, Matt. 2015. Gasman/MIDI-to-CNC. Accessed June 6, 2020. https://github.com/gasman/MIDI-to-CNC.

Wilf, Frederic M. 1986. “A Chip Off the Old Block: Copyright Law and the Semiconductor 
Chip Protection Act.” The John Marshall Journal of Information Technology & Privacy 
Law 7 (2): 4. 

Winner, Langdon. 1993. “Upon Opening the Black Box and Finding It Empty: Social Con¬ 
structivism and the Philosophy of Technology.” Science, Technology, & Human Values 
18 (3): 362-378. 

Witmer, Robert, and Anthony Marks. 2001. “Cover.” In Grove Music Online. Oxford University Press. doi:10.1093/gmo/9781561592630.article.49254.

Wittje, Roland. 2013. “The Electrical Imagination: Sound Analogies, Equivalent Circuits, 
and the Rise of Electroacoustics, 1863-1939.” Osiris 28 (1): 40-63. 

———. 2016. The Age of Electroacoustics: Transforming Science and Sound. Cambridge: MIT Press.

Zadeh, L. A. 1962. “From Circuit Theory to System Theory.” Proceedings of the Institute of
Radio Engineers 50 (5): 856-865. 

Zielinski, Siegfried, and Geoffrey Winthrop-Young. 2015. “AnArcheology for AnArchives: 
Why Do We Need—Especially for the Arts—A Complementary Concept to the Archive?” 
Journal of Contemporary Archaeology 2 (1): 116-125. 

Zimmermann, Basile. 2015. Waves and Forms: Electronic Music Devices and Computer Encodings in China. Cambridge: MIT Press.



APPENDIX A 
Interview with Ralph Jones 


This interview was originally conducted by Matt Wellins in 2010. I transcribed it from a publicly available recording in January 2020, and the transcript was expanded by Ralph Jones shortly afterward.

M: Were you interested in electronics before you went to school?

R: No, but I always loved music. My first instrument was the flute. I played French horn in the high school band and, like most kids of my generation, I had a rock band — I played lead guitar and sang lead. When I entered Hamilton College as a freshman, I was going to be premed; my father was a doctor, and my parents wanted me to follow suit. That didn’t last. Music was really my passion, so for my sophomore year I transferred to SUNY at Buffalo as a music major. I played horn in the school orchestra throughout my undergraduate years, but I rapidly became interested in fiddling with a tape machine in my living room, cutting and splicing tape, playing sounds backward, experimenting with Radio Shack contact mics, that sort of thing. And I was listening to artists like Jimi Hendrix and Pink Floyd, then to Subotnick, Berio, Pierre Schaeffer, Henri Pousseur and so on. So, my ears were being stretched in all directions, and I found it all really exciting. SUNYAB had one of the earliest electronic music studios, with a huge Moog modular synthesizer, an EMT plate reverb, a Bode ring modulator and a five-foot-tall vocoder built by Moog. I took the undergraduate class in electronic music and learned the basics of music synthesis in that studio. [207]

On graduation in 1973, I enrolled in graduate school, and that summer I attended New Music in New Hampshire, a workshop/festival program in Chocorua, NH created by composer and flutist Petr Kotik. The instructors were all prominent avant-gardists of the time: Petr, David Behrman, Julius Eastman, Gordon Mumma, Fred Rzewski and David Tudor. I took Mumma and Behrman’s workshops on basic circuit design and construction, and was one of the students who studied with Tudor, learning his piece “Rainforest”. At the conclusion of the festival, we did a performance of Rainforest that expanded David’s tabletop version to larger objects that we hung from the rafters of the old wooden barn where we did all our performances. The experience was absolutely life-changing for all of us, including, I think, for David himself. We all knew that, together, we had created something very special, and we knew that it had to have a life beyond that one performance. We met afterward and decided to form a group under David’s aegis. This became Composers Inside Electronics, dedicated to performing the piece that David titled Rainforest IV.

That fall, in my first semester of graduate school, I became a member of the Creative
Associates. It was the performing ensemble for the Center for Creative and Performing Arts, 
an organization founded by composer Allen Sapp and composer/conductor Lukas Foss, and 
headquartered on the SUNY at Buffalo campus. At that time Foss was also Music Director of 
the Buffalo Philharmonic Orchestra. It was a terrifically exciting time. Modern composers came
through from all over the world for residencies and performances in the Center’s Evenings 
for New Music concert series, and I was responsible for all the electronics for the ensemble. 
I was privileged to tour and perform with them for three years. 

M: When you were at Buffalo you studied with Morton Feldman, Julius Eastman and 
Lejaren Hiller. Were they influential for you? I know Feldman was sort of not a big fan of 
electronic work... 

R: Not particularly, though he actually did make one piece for tape in ’53 called 
“Intersection”. He was a terrific teacher. I studied with him, studied with Eastman, and all 
during that time I was writing pieces and having them performed. I made not only electronic 
music but also instrumental music. I wrote a string quartet, a piece for choir and tape based 
on the Fibonacci series, a piece for flute called “Saturday Afternoon 5 O’clock” that was 
adopted by the flutist Eberhard Blum; he performed it all over the place, in concert halls 
and art galleries, but also in coffeehouses and other more informal settings. 

M: Would you consider your work at the time to be influenced by your professors or 
were you trying to break out from some of their ideas? How is the dialogue there in terms 
of your creative work? 

R: Well, Julius was influential in expanding my ear: he was an incredibly great artist 
and he was not afraid of rough sounds. Feldman... there was always that other, meditative 
side of what I was doing, the best example of which is the flute piece I mentioned, which is in 
what some call “vertical time” — that is to say that it doesn’t progress as time passes. Morty 
had quibbles with some of the things that I did, but he taught me a lot. Jerry Hiller was best 
known for computer music. I had some exposure to computers in high school, learning to 
program an IBM mainframe with punch cards, but I was never really interested in computer 
applications for music. To me, computers seemed to be tools for absolutely deterministic, sample-accurate work, while I was going in the direction of indeterminacy. The flute piece,
for example, is notated as a mandala-shaped mobile, with durations that come from the 
performer’s breath. And God, what a lot of work computers entailed, especially in that 
era: writing programs in Fortran or, quite commonly, machine language. It all seemed 
very remote from music. But Jerry encouraged my experiments: he was my master’s thesis 
adviser, and my thesis project was to design and construct a transposing microphone for 
airborne ultrasound that enabled me to explore some of that realm above the upper limit of 
human hearing. I approached it as a kind of ear training, and a search for novel naturally- 
occurring sounds like insect songs. 

I was really interested in trying to find ways to create musical sounds with the complex 
qualities of natural sounds, and the electronic instruments of the time were only really 
capable of producing mathematically pure waveforms. You started with one of the four basic 
waveforms, and you could do things with frequency and amplitude modulation, filtering and 
things like that, but everything still had a kind of pristine quality to it. I wanted to get 
sounds with more juice. So, I ended up studying privately with Bob Moog for a couple of 
years, learning about circuit design for music, then started later to apply what I learned to 
making circuits with complex multiple feedback loops and things like that to create much 
more raw, complex, interesting sounds. It all dovetailed nicely with, and built upon, the 
things I learned from Behrman and Mumma and Tudor at Chocorua.

M: Moog’s work is usually considered to be fairly conservative in that vein; was he interested in pursuing those paths with you?

R: He was completely open and wonderful, very creative and very open-minded. And 
also, as you do when you’re young, I was experimenting with a lot of different things. Under 
his tutelage I made a piece called “Circuitree” which comprised five individual units that 
were sensitive to wind, changes in temperature and patterns of light, and were designed to 
be hung in trees and produce sound in response to their environment. 

M: One of the things I was curious about is how you got equipment back then. What 
type of transducers you got and how you integrated that stuff, getting wind, and how you 
converted that into sound. 

R: It was pretty simple stuff. I used wind as a trigger, sensing it with a mercury switch 
mounted on a thin, springy wire with fins to catch the breeze, and in those days, it was pretty 
easy to get photocells and thermistors that could sense light and temperature. I haunted surplus stores for parts and adapted them to my purposes.

M: Moog probably also had a studio stocked with that stuff, right? 

R: He knew where to find it! 

M: Were you reading periodicals like Electronic Music Review or Source?

R: To an extent, but not a lot. I'd run across them, but I didn’t read them religiously 
by any means. 

M: So, you had contemporaries at school with you that you discussed this stuff with? 

R: Yeah, you know that was the real thing, I was exposed to so many different ideas 
and schools of musical and artistic thought. There was a very active organization at the time 
called Media Study Buffalo which was also associated with the university. The man who 
founded it, Gerry O’Grady, was a professor at the university, but the organization was an 
independent nonprofit. I got hooked up with him and I did experimental video, studied with
Woody and Steina Vasulka, and people like Stan Brakhage were coming through and it was 
just an unbelievable melting pot of wonderful composers from Spain, or Japan, or elsewhere. 
Each summer, there was a new music festival that Morty was very involved with for a couple 
of years. The Creative Associates presented extensive programming at the Albright Knox 
Art Gallery and they attracted people from all over the place. So, you know, it was definitely
a glut of influences. Nam June Paik, John Cage came through, but in the end the person 
who influenced me the most was David Tudor. 

M: You went to the Chocorua festival, with Mumma and Behrman, but how did you 
get teamed up with Tudor specifically? 

R: I think we just gravitated toward one another. I was fascinated by his ideas, and 
the things that he and Mumma and Behrman were teaching were right in line with what I 
was interested in at the time, and they kind of interwove their teaching. By that time, I had 
built several circuits, and we made preamps that we used in Rainforest. I just thought David
was a fascinating man. His idea of using physical objects as kind of extremely active filters 
and loudspeakers... it was just magical. 

M: What was the preparation process for that? Did you go out and collect materials? 

R: We all went out on forays to find stuff. The festival was held in the White Mountains, 
there was an inn with a big old barn, and there were things lying about. One of the things 
we picked up there, that John Driscoll developed and became an anchor object in Rainforest 
IV, was a huge iron wagon wheel rim. I believe David and John found it on the grounds of the inn, and I vividly remember the two of them laboring to drill a hole in it to mount the
transducer, one working the drill and the other feeding cutting oil to the bit. It took a long 
time. So, yeah, we were finding stuff all over the place, junk yards and so on. 

M: What were relationships like in CIE? How did you interact with each other? Was 
Tudor around a solid period of time, or was he travelling, so there would be lapses? I don’t 
quite understand how it operated as a collective. 

R: We came together for residencies, workshops, performances, and everyone had their 
own lives outside of that. But we had forged a strong bond in that concentrated time in 
New Hampshire; it was like a family — you see an old friend you haven’t seen in a year, 
and you just pick up as if it’s been no time at all. You’ve even each grown in similar, or 
complementary, directions. There was such an active scene back then that we’d see each 
other all the time. I’d go to New York regularly for concerts in Soho, or to see what was 
cooking in the galleries. I arranged a few performances in Buffalo with funding through the 
New York State Council for the Arts, and through Media Study Buffalo. 

M: I read about a residency you guys did up there where everyone gave presentations 
and there was some National Endowment funding. 

R: We did a couple of residencies, one where we worked on developing directional loudspeakers, and David was involved in this. It never really resulted in any group performances,
but John Driscoll did some serious work that grew out of that residency: he made a few 
pieces with motorized directional loudspeakers that he developed there. Media Study was in 
this building that had at one time in its history been a hotel, then a film and TV production facility with a huge, twenty-four-foot-high sound stage lined with fiberglass insulation. It had a separate control room looking out on the sound stage, where the audio recording booth was located. That was where we did the directional speaker development and testing.
And it also had a big indoor pool in the basement, which had been drained dry for years. The 
walls were also high in there, and it was obviously a very live space. Some of us from CIE 
came together for a residency there where we made pieces specifically for that space. And 
Bill Viola brought in the terrific Yoshi Wada, who was just astonishing. He’s an industrial
plumber — poured molten lead, stuff like that — and he made a huge bagpipe instrument. 
It was just gorgeous, and its sound was really impressive and powerful. 

M: When you were developing actual technologies, what was the process like? With 
the directional speakers, for instance, how did you collaborate in testing and working?

R: Rainforest established a way of working where we started with a common concept
and then each made our own experiments. With the directional loudspeakers, we weren’t 
working collaboratively to develop a single loudspeaker. We each explored ideas for achieving 
directional control more or less on our own, and we’d chat about things and ideas would 
bounce back and forth. It was kind of like making a dinner, one person making the salad, 
another a meat course, another the vegetables, and it would all come together. Incidentally, 
food was actually a big part of our familial interaction. David was an incredible, legendary 
cook, and Billy was pretty darn good, too. 

M: In regards to that specific project, what were you looking at, yourself? 

R: I was looking at principles from antenna theory. One obvious one is a parabolic 
reflector. For my Master’s thesis on naturally-occurring ultrasound, I built an ultrasound 
detector and transposer circuit that transposed sound through heterodyning, much like in 
an FM radio. Ultrasound is rapidly absorbed in air, so it doesn’t travel far and the sound 
intensity falls off rapidly over relatively short distances, so I needed a very sensitive mic and 
I settled on using a parabolic reflector to collect and focus the vibrations on a wide-range 
electret element. I was friendly with Nina Freudenheim, who had a wonderful gallery and 
was very active in the Buffalo scene, and her husband was a businessman whose factory had 
vacuum-forming technology. I approached him about the project and he graciously agreed 
to make reflectors for me. In this directional loudspeaker project, I mounted a piezoelectric 
tweeter at the focus of one of them and it worked spectacularly. There’s another type of 
high-gain antenna that is like a rod with a series of evenly spaced disks mounted along it 
perpendicular to the axis of the rod; it’s called a multi-element dipole. I prototyped one and 
drove it with a dome tweeter having a hyperbolic reflector around it, cut off at the focus 
with the tweeter mounted on that plane. It was pretty effective, but only within a narrow 
frequency range. That’s a primary consideration in trying to make directional speakers: 
because of physical limitations, the directional behavior inevitably breaks down outside of 
a certain range. Audible sound encompasses a huge range of wavelengths, from tens of feet 
down to fractions of an inch, so different techniques are most usable at different frequencies. 

M: In the microphone work, where you were transposing ultrasounds down into the 
audible range, was that... 

R: That was absolutely fascinating. I was living in a former Baptist church on Grand 
Island, a big island in the Niagara River midway between Buffalo and Niagara Falls. It was a terrific environment for me; there was a lot of space and I had room to work with pretty
big rainforest objects, hold rehearsals, and so on. The island was a mix of suburban housing 
and open space, and the church bordered a large, grassy field. Perfect environment for field recordings. I’d go out in that field and hear almost nothing with my ears, then turn my
equipment on, scan around with the parabolic, and I’d hear a cacophony of sliding pitches 
and roars and rasping rhythmic sounds, all kinds of stuff. An entire world of insect songs 
beyond our hearing range. 

M: How did it work? 

R: I found a miniature electret condenser transducer that was flat to 100 kHz, made by a company called Knowles Electronics. I made a housing for it and its preamplifier using brass tube stock so that it could be mounted at the focus of a parabolic reflector. I designed a circuit to heterodyne the signal up using an upshifting variable-frequency carrier, followed by a fixed bandpass filter so that, by sweeping the carrier frequency, I could select any 20 kHz-wide frequency band between 20 kHz and 100 kHz. That stage was followed by another to heterodyne the signal back down against a fixed oscillator.
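Jones’s chain (upshift by a swept carrier, fixed band-pass, then downshift against a fixed oscillator) is the superheterodyne principle he compares to FM radio. The following is a minimal digital sketch of the same band-selection idea; the sample rate, the 45 kHz test tone, the band edges, and the brick-wall FFT filter are all illustrative assumptions, not details of his analog circuit:

```python
import numpy as np

fs = 400_000                      # sample rate high enough to represent ultrasound
t = np.arange(0, 0.1, 1 / fs)

# stand-in "insect song": a 45 kHz tone, above the limit of human hearing
x = np.sin(2 * np.pi * 45_000 * t)

def brickwall(sig, lo, hi):
    """Ideal band-pass by FFT bin masking (illustrative, not Jones's analog filter)."""
    spec = np.fft.rfft(sig)
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    spec[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(spec, len(sig))

f_sel = 40_000                    # bottom of the 20 kHz-wide band we want to hear
f_fix = 120_000                   # fixed oscillator / lower edge of the fixed band-pass
f_var = f_fix - f_sel             # variable carrier: sweeping it selects the band

up    = x * np.cos(2 * np.pi * f_var * t)       # heterodyne up (sum and difference)
band  = brickwall(up, f_fix, f_fix + 20_000)    # fixed 20 kHz-wide band-pass
down  = band * np.cos(2 * np.pi * f_fix * t)    # heterodyne back down
audio = brickwall(down, 0, 20_000)              # keep only the audible difference term

peak_hz = np.fft.rfftfreq(len(audio), 1 / fs)[np.argmax(np.abs(np.fft.rfft(audio)))]
# the 45 kHz input lands at 45 kHz - 40 kHz = 5 kHz, well inside hearing range
```

Sweeping `f_var` plays the role of Jones’s variable-frequency carrier: it slides a different 20 kHz slice of the ultrasonic spectrum into the fixed band-pass, and the fixed oscillator folds that slice down into audio.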

M: Interesting. Did you also conceive Star Networks around that same time? 

R: That was a bit later. CIE was scheduled to present a series of concerts at the Kitchen 
in New York City, and I wanted to make a new piece for the event. The first performance
was on one night in that series. Everyone in the group built their own instruments according 
to the guidelines that I outlined, and Tudor taught me a lot with his contribution. I was 
building my various networks in boxes and interconnecting them with patch cords, but he 
just mounted all his passive components on the bottom panel of a wooden drawer, like a big 
open breadboard, and I thought “geez that really is the right way to go.” So, later, I built 
my instrument that way with the help of my friend, composer/percussionist Tim Leaphart, 
mounting it in an empty molded plastic Moog Sonic Six case that I found in a local surplus 
store. 

M: Was the open face circuit design a thing that other people were doing at that time? 
It seems like it’s become a popular thing since then. .. 

R: It’s a take on the principle of breadboarding, and that’s what Star Networks is 
about: arranging a set of passive components so that you can freely connect them in an 
ad-hoc, improvised network. You make a sort of web with many nodes, each of which has 
at least three branches, creating a mesh of series/parallel interconnected components. Each branch is a passive component like a resistor, capacitor, inductor or variable inductor, diode,
pot, transformer, or what have you. Then you pick a point to feed and one to pick up from, 
and you connect a high gain preamplifier to it. This makes a feedback network around the 
preamp that has multiple paths through it. 

M: Were there recordings made? I haven’t heard a lot of material. I know there were some from the Kitchen released a while back.

R: The concerts at The Kitchen were recorded, but I don’t think they ever released Star 
Networks, and I’m fine with that. I wasn’t really satisfied with the performance, even though 
others thought it was okay; I thought it went much better in rehearsal. It was a new work 
in progress at that point. And any time you introduce that level of indeterminacy, you’re 
forcing yourself to accept the outcome — even if it’s not a satisfying listening experience. 

I performed Star Networks a couple of times after the Kitchen series as a duet with 
Tim Leaphart, and then I attempted several solo performances. Doing that piece solo was 
the most difficult thing I’ve ever done. I had to keep track of at least four complicated 
sound-producing circuits and mix the whole thing while pulling one circuit at a time out of 
the mix to retune it. But pulling a sound meant I had to go back in and redo the circuit, 
retune, find something, and put it back out there within a musically appropriate interval, so 
I didn’t have 20 minutes to change one sound, right? More like a few minutes total. It was 
extremely taxing. 

Nonetheless, I performed it many times, and there are recordings. But in those days 
often the recordings for live electronic music were something of an afterthought. You’d end 
up with a kid who was a hanger-on at the venue, running cables and stuff like that, who 
would be the one setting up a tape machine — usually a cassette recorder — with maybe a 
couple of mics on stage or a Sennheiser binaural in the audience, so you got a lot of room 
sound. Or a split from the house feed, so it was completely dry. Setting levels was always a 
problem, and the kid would usually get freaked when the sound was loud and radically duck 
the volume in different places. 

M: They still do that! 

R: I’m sure! At any rate, it was really difficult to get a good recording; just in the past 
few months I’ve been going through the live recordings, and generally they’re terrible. The 
quality isn’t very good, the acoustics are not great. I’ve been out of playing music for quite 
a while; I’m just beginning now to work on it again, and one of the things I’m considering is making a studio version. It’s going to take a long time, using multitracking and really
spending some time with it — maybe take six months or more just to make the circuits and 
record them properly, then working on mixing it appropriately. So, it’s a challenge. 

M: Do you still have all your equipment? 

R: From the old days? I do! Most of it, at least, though there was one thing that was 
destroyed. I did a piece with four digital filters implemented using CMOS analog switches. 
And that instrument got destroyed before I could make a proper recording. 

M: What was the arrangement with the filters exactly? 

R: It was a classic implementation that I read about in a magazine article or an 
electronic handbook or something. There’s a way of making a filter by using a string of analog 
switches activated sequentially by a shift register, each of which connects to a capacitor to 
ground. The input signal sees all of those switches, so that when a particular switch is 
closed the cap charges to the input’s instantaneous voltage and then holds that voltage 
when the switch opens. That’s followed by a FET or MOSFET amp that presents a high 
input impedance to buffer the output. You clock the shift register at a given audio-frequency 
rate, and the circuit acts like a resonant tube. It rings like a bell at a fundamental frequency, 
determined by the number of switched stages and the clock rate, and at integer multiples of 
that fundamental. I drove it with swept sine waves. It made these beautiful, bell-like tones 
in a harmonic series. Remember when we were talking about Feldman and I referred to a 
completely different strain in my work, exemplified by my flute piece? This was another such 
piece of “pretty” music. The flute piece was a deconstruction of a single chord from Gagaku 
music, and the clock oscillators here were also tuned to the pitch relationships in that chord. 
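The commutated-capacitor resonator Jones describes can be modeled in a few lines: a shift register closes one analog switch at a time, the selected capacitor charges toward the instantaneous input voltage, and a high-impedance buffer reads the held voltage out. The parameter values below are illustrative assumptions; with 8 capacitors advanced every 4 samples at 48 kHz, the model should ring at 48000 / (8 × 4) = 1500 Hz and its integer multiples, as he describes.

```python
import numpy as np

def switched_cap_filter(x, n_caps=8, step=4, charge=0.08):
    """One capacitor at a time is switched to the input (shift-register style)
    and RC-charges toward it; a buffer reads the active cap's held voltage."""
    caps = np.zeros(n_caps)
    y = np.empty_like(x)
    for i, s in enumerate(x):
        k = (i // step) % n_caps           # which switch the shift register closes
        caps[k] += charge * (s - caps[k])  # cap charges toward the input voltage
        y[i] = caps[k]                     # held voltage appears at the output
    return y

fs = 48_000
f0 = fs / (8 * 4)                          # fundamental: clock rate / stages = 1500 Hz
t = np.arange(0, 0.5, 1 / fs)
on  = switched_cap_filter(np.sin(2 * np.pi * f0 * t))         # tone at a comb peak
off = switched_cap_filter(np.sin(2 * np.pi * 1.37 * f0 * t))  # tone between peaks
rms = lambda v: float(np.sqrt(np.mean(v[len(v) // 2:] ** 2)))
# rms(on) comes out several times larger than rms(off): the circuit passes the
# fundamental and its integer multiples, like the resonant tube Jones mentions
```

On resonance each capacitor samples the input at the same phase every cycle, so the bank settles into a stepped copy of the waveform; off resonance the held voltages average toward zero, which is what makes the structure ring like a bell at the harmonic series.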

M: Jumping back to Star Networks, I read that one of the specifications is that you 
wanted to move the oscillations into unstable territory, and I can imagine that being difficult 
in real time. 

R: It can be, but it’s the very essence of the piece. The goal is to tune for “tipping 
points” of instability where the network comes alive and “sings” unpredictably of its own 
accord. In other words, you vary the preamp gain and polarity to find a point where, 
given that there are a number of paths that the feedback signal can take, it starts to flip 
unstably among them. Adding a second, different feedback loop around a second preamp, 
independently balancing the gain and polarity in the two loops, makes it easier to induce 
instability. That’s the entire idea, to search for a tuning where each oscillating circuit is “stably unstable”, completely unpredictable from moment to moment but bouncing around a
set of possible states that are determined by the network. It’s the very definition of chaotic 
system behavior. It’s like, by tuning it properly, you’re freeing it to randomly explore all its 
possible states. 
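The tipping point Jones describes can be caricatured with a toy discrete-time model: two delayed feedback paths of opposite polarity summed into a saturating high-gain preamp. The path coefficients and the tanh saturation are my illustrative assumptions, not a model of his passive networks; the point is only that raising the loop gain past a threshold turns a decaying response into a self-sustaining oscillation.

```python
import numpy as np

def star_loop(gain, steps=4000):
    """Two delayed feedback paths of opposite polarity summed into a
    saturating preamp, modeled here as tanh."""
    y = np.zeros(steps)
    y[1] = 1e-3                 # tiny perturbation standing in for circuit noise
    for n in range(2, steps):
        y[n] = np.tanh(gain * (0.9 * y[n - 1] - 0.7 * y[n - 2]))
    return y

quiet = star_loop(gain=0.8)     # below the tipping point: the loop settles to silence
alive = star_loop(gain=1.6)     # above it: the saturation catches the growing
                                # oscillation and the network "sings" on its own
tail = lambda v: float(np.sqrt(np.mean(v[-500:] ** 2)))
```

Linearizing around zero, the loop’s poles sit inside the unit circle at low gain and outside it at high gain; the saturation then bounds the growth, leaving a sustained oscillation that never repeats exactly, which is one way to read “stably unstable.”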

M: It seems Tudor’s work at the time was also very preoccupied with unstable oscillation, as a means of getting those natural sounds you were talking about. What was the dialogue about that?

R: We never talked about it. 

M: Really? 

R: Yeah, though Star Networks definitely reflects David’s influence on me. I don’t 
think I ever would have had the idea if I hadn’t met David and worked with him all those 
years. 

He once described the piece to Cage in my presence. I performed it for a Cunningham 
company event at their loft in New York. Merce had had a knee operation, and it was 
his first time dancing since the operation. Everyone was very curious about what would 
happen. Cage and David showed up, and David brought Cage over afterwards, showed him 
my instrument and just very briefly described how the work was true randomness, or what I 
later described as “surfing the chaos.” David didn’t talk much about things like that. If you 
got him on Indian cooking, he could go on for hours, but when it came to musical aesthetics, 
all you could get out of him was maybe a cryptic koan-like question that you’d spend the rest 
of your life trying to understand. It’s one of the reasons there are so many mysteries about 
his work that young people are trying to unravel from the instruments, notes and diagrams 
that he left behind. There are boxes of his... no one knows what they are, what they were 
intended to do or what they actually did. They have a couple of RCA jacks on them and a 
bunch of parts inside. You draw a diagram of it, and it doesn’t seem to make sense. 

(laughs) 

M: So, when you went on tour with the Merce Cunningham dance company, was it 
with Star Networks? 

R: I never toured with Cunningham, just the one performance at his loft space. 

M: But you did take Star Networks to Europe you said? 

R: Yeah, I performed at the Ars Electronica Festival. Just a brief version, actually. It 
was for a competition, and there was a time limit... which, as I recall, I exceeded because 
I got carried away. 

M: Were you touring alone at that time? 

R: No, with CIE. We did Rainforest at that festival, if I recall correctly. It was a long 
time ago. 

M: The group ceased at some point? Was there a point where you guys stopped 
collaborating in that capacity? 

R: Yes. Everybody’s paths were diverging and the times were changing. People were 
doing different things. CIE had had its time, and people were moving on. John Driscoll 
was pursuing a career in the computer game industry, Phil was advancing in his corporate 
software career, Billy’s obviously had an extremely active and successful career. And then 
David died. We came back together to perform Rainforest at the Judson Church memorial, 
and it was like everybody realized this was a masterwork of 20th century music. Somebody 
had to come together and preserve it, and we wanted to perform it again and revitalize it. 
So, we did the thing at the Tudor symposium at the Getty, and I know some of the members 
performed it at the Kitchen, though I didn’t get to that performance. 

M: Did you begin to investigate digital electronics as they became more available? 

R: I haven’t very much, to tell you the truth. I’ve always been an analog guy. Actually, 
what happened to me was I moved to California. I had been sort of bumping up against a 
ceiling in Buffalo and I wanted a change in my life. There were always people that I knew 
out here, so I thought I’d move here and maybe get a gig at Mills College or something like 
that. Of course, nothing like that was available to me, and the scene was very different from 
what I was used to, so I tiled floors for a while. Then I met John Meyer and heard his ACD 
Studio Monitor system that he had designed in a residency in Switzerland. He was going 
to start a company and import it into the States. And I had never heard anything like it 
before. I kept in touch with him and became his first employee, worked with Meyer Sound
for quite a long time. At the beginning I was still performing and making pieces, doing stuff 
out here on the west coast, and then at some point I moved to Los Angeles and I started 
doing producing and writing music for him and television. 

M: I wanted to ask you about Slumber Party Massacre. Was that your first scoring 
experience? 

R: Yeah, that was my first score. They had a tiny bit of money for music, so I went to 
the Macy’s electronics department and I bought a little Casio synth, got a hold of a couple 
cymbals and used crystal wine glasses from a second-hand shop. I rented a reel-to-reel four 
track and did the score in my home. 

M: I noticed the director had the same last name. 

R: Yeah, that’s my sister, Amy Holden Jones, now very successful in film and television.

M: So, that’s a substantial shift aesthetically? 

R: Well, of course! I don’t know what to say. It’s not in the same context as the 
concert work that I did, but it was an interesting thing to do. 

M: I’m curious about your approach. It’s very interesting to be able to straddle different 
areas like that. You were ready for a change? 

R: I was ready to make some money. More importantly, it was my sister’s first experience directing as she was paying her dues under Roger Corman, breaking into the industry. And it was challenging. Just try to watch a film like that without the sound. Someone
goes out the back door of a house, empties the garbage and goes back in, and that perfectly 
ordinary scene is supposed to be fraught with uncertainty and the threat of death at any 
moment. That all comes from the music. 

M: When you were in San Francisco did you get involved with anyone from the Mills
Tape Music Center?

R: I ran into some of them, not a lot. I had done a couple of concerts at Mills when 
I was touring and I knew the people there. Maggi Payne of course, Behrman, and Paul de 
Marinis, kept running into Paul for a long time. Have you ever talked to him? 

M: No, but I’ve read a lot about him.

R: You really should have the experience of talking to him. He’s startlingly intelligent, 
in a very facile way. By which I don’t mean superficial, I mean effortless. He has an 
encyclopedic memory and very wide experience, quite an intellect, ideas just at his fingertips. 

And he’s a really nice guy. You’d enjoy talking to him. Paul was one of those people 
who would drop in and out of the CIE framework, come in and participate in some of the 
performances. 

M: Who else was involved? I have a list of maybe 4 or 5 people.

R: Who? 

M: Bill Viola, Phil Edelstein, Martin Kalve, John Driscoll, that’s all I’m aware of... 

R: And Linda Fisher. That was pretty much the core group that came out of the New 
Hampshire festival. But CIE has always worked with others, as well. In addition to Paul D, 
Nic Collins and Ron Kuivila were very active with the group since the early days. 

M: To jump back a little bit, were you aware of Mumma’s work with horn, since you 
had a background in French horn? 

R: Yes, I was. 

M: Did you combine acoustic instruments and electronics at any point? 

R: Not much. I mostly pursued them separately. One exception was that I did a version 
of that piece for solo flute for a single performer with a long tape delay. I’d take two Revox 
reel-to-reels and place them about 10 feet or so apart and run the tape from the supply reel 
of one to the take-up reel of the other, recording on the supply machine and playing back 
on the take-up machine to make a really long, recycling, layering effect. 

M: What was the impetus for that? 

R: Well, the original idea for the piece was that it was to be for five or more players. 
It’s not necessarily easy to get five flute players together. And Blum was very interested in 
making it a solo piece. It was really nice to make it a solo piece using tape delay. It had this
quality of memory, things coming back after the long delay, recycling, and gradually fading 
away. Blum also did a studio recording of the piece, and there’s a CD. He did not use a delay 
there, just multitracking. It’s the definitive performance, I feel. He did it shortly before he 
retired from music. 

M: When you were talking about the Meyer speakers, you said it was very surprising 
to you how they sounded — could you talk a little bit about what you mean by that? 

R: Well, it was the first time that I heard a really linear loudspeaker system that 
didn’t have distortion products. That was confusing at first, because there would be less 
high-frequency components in the material, and I’d be thinking wait, did the tweeters go 
out? And then a cymbal would crash or there’d be a triangle or something, and I’d go “yeah, 
it’s there!” And the imaging was, for the time, astonishing. I don’t know if you’ve heard 
his products... Meyer Sound is the premium professional speaker manufacturer, far above 
the rest of the field. When I was with them they developed a wonderful nearfield monitor 
called the HD-1. Eight-inch low frequency cone and a one-inch dome tweeter, self-powered, 
phase corrected, and each unit was individually tested and aligned in an anechoic chamber. 
It sounded fabulous, the most accurate monitor of its time by far. I learned an incredible 
amount about acoustics from John. He’s one of the true geniuses. There are not many 
true geniuses in the world, and I was privileged to have spent time with and learned from 
him. He’s one of those rare people who have an intuitive feel for mathematics... really 
fascinating. I was drawn to him because I have a real sensitive ear, and I stayed for the 
prodigious intellectual stimulation. Wherever the pro audio industry is at a given time, 
John is years, even decades, farther ahead. 

M: It’s interesting coming to that from a background in Rainforest, where you’re specifically limiting the speaker responses in certain ways. In a lot of ways, it’s a criticism of
recorded music. 

R: It is. You know, we had little interest in recorded music at the time. That’s one of 
the reasons why there aren’t any decent recordings. David never really wanted the speakers 
to be precise and calibrated, like John Meyer’s products. Rainforest was the embodiment 
of David’s philosophy that the speaker is an instrument, just like a violin or piano. It has 
its own characteristic sound. And it is emphatically live electronic music, as distinct from 
playback of a recording; that was the point. Many of us in that time tried to make music 
that was alive, in the sense that it was different each time it was performed. My flute piece 
uses a mobile score, like Earle Brown’s work. It’s intended to be different, from moment 
to moment, every time it’s performed. And obviously Star Networks is completely chaotic. 
Every time I've played that piece, utterly different sounds have emerged, though the piece 
does have a characteristic sound. 

Now, I’m experiencing a change in my thinking. I don’t know if it’s simply that I 
have fewer opportunities to do concerts, or because I’m 58 years old and not really in shape 
to do the kind of touring I used to do. It would be difficult now to live out of a suitcase. 
Exciting when you’re 20, but a strain at my age. But just in the last year, I’m experiencing 
a kind of shift in my thinking, and I’m drawn to actually composing, in the sense of putting 
things together, and working with fixed media, what we used to call “tape music” though, 
increasingly, tape isn’t involved at all. Maybe that’s just me getting more old-fashioned as I 
get older, but I think it also has to do with the way our consumption of music has changed. 
People increasingly listen to music only in pre-recorded form, on their phones, using earbuds. 

M: It’s interesting now that the equipment can handle the sounds you’re producing 
in different ways. Is the relative lack of multichannel systems, the fact that it’s not common
practice, dissuading in any way? You used 5 different speakers for all the oscillations on
average... 

R: Well, there are things you can do. There are recording tricks you can use that make 
it seem like sounds are completely outside of the field between the loudspeakers, for example, 
and I’m really interested in exploring things like that. And a few more critical listeners have 
surround systems, so maybe that will take off, though I doubt most people are willing to 
take on the work and expense. But I wouldn’t call it a deterrent. 

M: Were there other electronic pieces that you did in that time frame that you’re 
interested in revisiting? 

R: Well, I was going to try to revisit the digital filters, but the original instrument is 
gone. I thought maybe I could do something with the recordings, but they’re just junk. 

M: You could still get the CMOS ICs! 

R: Yeah, I could recreate the instrument, I suppose, but it was pretty complicated, 
four separate densely-populated boards on a backplane. 

M: I noticed a Ralph Jones was doing jazz-related stuff, is that you? 

R: Not me, no. I like jazz to listen to, but I never acquired that kind of facility with 
an instrument. And by the time I got into electronic music I was interested in making 
instruments you couldn’t control — in fact, ideally, I didn’t want to “play” them at all. I was 
a pretty good horn player, but classical only. Someone tried to get me into a jazz ensemble 
one time in college and I gave it a try, but I sucked. Completely. It just wasn’t my thing. I 
admire people who can do it. 

M: There was one other thing with the Buffalo residencies. You’re listed as giving a 
presentation about designing collaborative compositions. I was wondering if you could say 
more about those ideas and the presentation, maybe. 

R: Well, the basic notion there was, how do you find a structure to make music in a 
really collaborative fashion if you’re not going to be playing jazz, using the diatonic scale, a 
kind of a repeating ground of chord changes and modulations — how do you make a structure 
for it? David did it really effectively in Rainforest, which stems from a very basic principle: 
get an object, put a transducer on it, put a contact mic on it — usually phono cartridges but 
not always — and find sounds that complement the resonant characteristics of the object. 
So, it’s an open input, and as a collaborating performer you have incredibly wide latitude, 
within the bounds of taste and style, of course, in finding or creating your input signals. Very 
simple, and yet out of that, an incredible piece that has an absolutely identifiable, unique 
characteristic, a personality unlike anything else. You go to a performance of Rainforest, 
and it’s not like anything else you’ve ever experienced, and it’s always Rainforest. It’s not 
like one time you go and it sounds like this, another time you go it sounds totally different. 
It’s a consistent experience. So, I was riffing about that in the talk, that David’s open-input
structure became an ideal basis for collaboration. Obviously, one can have a significant 
impact as a collaborator making Rainforest IV, and that’s what I was after in trying to 
do Star Networks with CIE. Build your own instrument using your own set of components, 
create your own circuits and bring your own characteristic sounds to the table, yet if you 
follow the guidelines, the paradigm should result in a consistent, identifiable result. And 
that did happen; the performance at the Kitchen was not substantially different from when 
I performed it solo: it was the same piece. 

That’s what CIE was really about for me; it was an experiment in collaboration. To 
explore the idea of several minds coming together and seeing how collaboration could produce 
pieces. 

M: Were there other pieces you played as a group? 

R: There were a few, not a lot. Most of our performances, we would come together and 
each of us would do a piece. The dry pool residency was an example. Yoshi did his piece, I 
did mine, Billy did his, Driscoll did his. We tried to create a new collaborative piece with 
the directional loudspeakers but we never really got it off the ground. 

M: What happened? 

R: We were in residency for a couple of weeks, and we made these loudspeakers that 
were, you know... different people had different degrees of success. It’s a difficult technical 
problem, and it really requires more time and effort than was practical for us at the time. 
Probably one of the problems was that we were working only in a nearly anechoic space. To 
make a collaborative piece using directional loudspeakers, you have to have a space that has 
maybe unusual acoustic characteristics, and then you need time to explore how it interacts 
with beams of sound at various frequencies. 

I did end up using one of my directional loudspeakers in the dry pool project. I used 
a parabolic reflector with a piezo tweeter mounted at the focus, and I ran a simple train of 
pulses through the tweeter and slowly panned that beam across the space as I was walking 
in the pool. The train of pulses reflected in a lot of different ways through the highly 
reverberant, cement and tile room, and the interaction between the pulses and the space 
produced pitches. As I swept the space I could make little melodies out of the pitches that 
were different depending on where you sat.
M: That sounds really nice. 

R: It was a really sweet piece — 10, 12, no longer than 15 minutes long, and it had a 
distinct Alvin Lucier influence. I started at the shallow end of the pool and slowly walked 
to the deep end, very slowly sweeping the beam of pulses. 

M: Yeah, I was wondering, is it Vespers that has the... 

R: Yes, Vespers is Alvin’s piece using echolocation devices made for the blind to aid in 
navigation. And have you ever seen Bird and Person Dyning? 

M: Heard a recording, but never seen it. 

R: Bird and Person Dyning uses this little battery-operated thing, mounted on a tripod, 
that repeatedly plays a synthesized bird chirp sound, and Alvin wears a Sennheiser binaural 
mic that’s connected through a limiter to play through loudspeakers so that it produces 
feedback. The limiter clamps the feedback signal so that it’s continuous and stable. He just 
walks slowly through the performance space, turning his head and moving his body, which 
changes the pitches of the feedback, and the bird chirp heterodynes against the feedback. 
Similar idea, and it has a kind of theatrical aspect to it. 

M: It seems a lot of the Sonic Arts Union people were interested in theatrical elements. 
And there’s something theatrical in Tudor’s stuff, it’s just almost comically constrained. I 
remember reading somewhere that he said you shouldn’t smile in a performance, but maintain 
a serious demeanor. 

R: That’s very David. If you think about it, all those years in the ‘50s and ‘60s, he 
and John Cage were challenging every convention of “serious” music with pieces like 4’33”, 
or Cage’s Water Walk, which used objects like a goose call and rubber duckie as sound 
sources. People didn’t know what to think, and there was a lot of nervous laughter. So, 
maintaining a serious demeanor was essential, because this was serious work with a wide- 
ranging philosophical underpinning, even if it was intentionally provocative. I think that’s 
also why they always wore black suits, white dress shirts and skinny black ties. Serious 
concert garb. 

Alvin’s work was often theatrical in some way. Like I Am Sitting in A Room, I did 
that piece a couple of times as a live performance. It was an interesting experience; it works 
really well live. I used a reel-to-reel tape machine so between every iteration I had to rewind 
the tape. We didn’t have random access media back then. 

M: Was theatre something you were interested in? 

R: Yeah, sorta! Star Networks was theatrical just because the instrument looked really 
interesting — I would set it up so that the audience could see it — and the task was so 
damned demanding that I had this kind of involuntary intensity just coming off me, as I was 
so absorbed in the constant battle of doing the piece. The year before the New Hampshire 
festival I had a fellowship at Tanglewood, and I worked there on a piece that I never finished, 
a theater piece, staged with lighting, the performers doing rituals, and all that. And you
know I did a few things with Alvin. We did a performance of Queen of the South together at 
the Freudenheim Gallery in Buffalo. It was a beautiful gallery with a two-story high space
that was ringed by a second-story balcony. So, audience members could go up and look 
down on these Chladni patterns. We marshalled forces from the Media Studies group and 
had a video producer who made a really good document with several live cameras. That’s 
one of the few performances where we got really good documentation. 

M: There’s a video? 

R: Yeah. 

M: Do you have it? 

R: No, but Alvin did, and I imagine he still does. He exhibited it several times, I 
believe. He always said it was the definitive performance, which was really flattering and 
gratifying. 

M: You did video stuff as well at that time? 

R: Yeah, I tried. Obviously, by today’s standards, the technical means were quite 
limited. There was barely any equipment available other than cameras and recorders. All 
I did was a sort of “video concrete,” using video feedback and flashlights, different types of 
lights.... it wasn’t a big deal but it was great fun to experiment. I’ve never had a really 
strong visual sense, I’m blind to red and green, so I was never really a visual artist. 

M: I think I’m out of questions. Thanks again, this was great. 

R: Thanks for the interest, I’m really flattered. 



APPENDIX B 

Interview with Tamara Duplantis 


A set of questions were emailed to Tamara Duplantis in February 2020. She responded 
in March with an audio recording and documents. This is a transcript of that recording, 
addressing and expanding upon my questions. 

Tammy Duplantis: Starting with your questions, I’m just going to go through them. Is the
attached diagram a fair and accurate schematization of my work? Yeah, totally. I mean, it’s 
not quite the way that I think about the piece because I guess whenever I was constructing 
it, I was thinking of it a lot more intuitively. I wasn’t as concerned with the specific numbers. 
But yeah, that looks like what the program does, especially the second one, just because I 
feel like Downstream is a much more linear piece. The Everything Shuts Down branch, I don’t
think of that as being this whole sweeping other - it is really just whenever it hits that spot, 
it just turns off. I’m looking back at the first one right now. It’s been a little while since I’ve 
seen this one. Yeah, I like the second one more because it kind of gets across the linearity of 
the piece and of the loop and it has just all the different functions off doing their own thing. 
And probably as I ramble and as you see, I’m going to send a diagram of how it is for me. 
As you see those things, you’ll probably want to edit your diagram however. But it totally 
works. [00:02:55] 

What was most important to me in the making process? So there are a couple of 
different things. I think actually the better way for me to start with this is to start with what 
ideas did I begin with and which features developed experimentally. The initial - honestly, 
when I started programming this program, it was to make something a lot simpler. I was 
trying to make an accordion. I eventually ended up doing that in a later piece, Atchafalaya 
Arcade. I was just trying to make just a plain old musical instrument, but I was doing it on 
a plane without any documentation. So I decided to make this weird little program that I 
thought would kind of sequentially go up just the scale of different notes that the Game Boy 
soundcard could make and would print out the frequency values - the pitch values of that 
pitch. Sorry, I’m going to say that again. I made a program that sequentially went up the 
frequency values that you could make in the Game Boy soundcard and I thought would print those
values to the screen so that I would be able to say, okay, this is about an E, so I know that E 
is like - I need to send 8D or something to this register so that it will make the sound. Or
7D, I don’t know. So that’s what I was attempting to do whenever I started programming
it. But because I didn’t put the right flag in before - in the print value. Well, two things 
happened: One, I didn’t put in - I missed the flag converting pitch 1 to a decimal value, 
so it came out weird. The second thing that happened is if you look up at the top of the 
program where I have all of the variables, you’ll see most of them are in this ubyte format. 
The ubyte format is kind of necessary for working in the Game Boy development kit because 
it truncates everything to a value 0 through 255, which is an 8 bit value, which is what the 
Game Boy understands. It’s an 8 bit hand-held console. However, instead of using ubyte for
the pitch value, I put it in as an unsigned int. And what that does is it did not restrict the
value to 0 through 255 and I personally don’t quite understand the process because I’m more
of an artist. However, my understanding is that by using an unsigned int instead of ubyte, 
as the number ticks up, it goes beyond the bounds of its own spot in memory and starts 
pointing to other sections of memory in the rom. Then it just reads out the program. So as 
you are watching Downstream, what you are watching is an interpretation of the program 
as it runs through memory and spits out what it finds as gibberish. And there are a couple 
spots where that, to me, is pretty clear, just because I played around with the guts of it, 
right? But for example, so there’s the big intro, right? And then there’s this long section 
of white space where there is apparently nothing in those registers. So it just comes up 
with a big long string of nothing. Then after that, it hits this chunk that includes the word 
Downstream in it. I did not intentionally put that there. That is the header information. 
If I were to change the name of the program whenever I compiled it, whatever I compiled 
it as, that would be the text that shows up there. Then later on, once the text from the 
beginning returns, that was also not intentionally put there by me, that was after I had done 
most of the work on the project. I had added this text at the beginning to contextualize it a 
little bit and give it a bit of a narrative. And by adding it into the program, it then caused 
that text to show up near the end of the piece, because that’s where that text was stored in 
memory. [00:10:49] 

This is going to be a bit of a ramble, because everything is so interconnected in this 
piece, right? But so that’s the main visual function. That’s also the main logical function. 
That’s what’s happening. That’s what this program does. Everything else is a visualization 
and a sonification of that. So the visualization, because I forgot to put the flag in there 
initially to give that information context, it spits out whatever it finds there as kind of 
gibberish. It’s not really contextualized in any particular way. And it also reads it in a 
very interesting way too because it will - as it points to different spots in memory, it runs 
into these like, long strings of extremely variable size. And when it ticks up, it truncates 
that string a little bit more and more and more and more. So for example, there’s this long 
section in the middle where it kind of calms down a little bit and it just has the same chunk 
of gibberish going over and over again. Bum, bum, bum, bum, bum, bum, bum....[etc.] 
That’s just one very large chunk of memory. I’m not sure what it is, but if I had to guess, I 
would say it’s probably some of the interactive functions in the program, because whenever I 
would add different interactions to it, it would just get larger and larger. So that’s my guess. 
I’m not really sure. But yeah, it hits this very long string and it takes a long time for the 
Game Boy to print out that string to the screen. And then next time around, it still takes 
an amount of time, but it’s one character less and then one character less, and then one 
character less. And the amount of time that it takes to print onto the screen determines how 
long the program takes before it plays another note. Which is why all the gestures in there 
have this feeling of ramping up. It’s always getting faster and faster and faster and faster 
and faster and faster, because it’s starting with this very long string and then truncating it 
until it’s nothing and then hits another long string and then keeps going until it runs out. 
And then it hits another long string and it keeps going till it runs out. So that’s kind of 
what Downstream is doing as far as durations. [00:14:56] 

For the pitches, they are looping, because whenever I send Pitch 1, which is an unsigned 
int, whenever I send that to the sound - any of the sound registers, it does truncate it to 0 
through 255. Like, it truncates it to 8 bits. So the pitches from the pulse tone, the sounds 
from the white noise filter, the other pulse tone, all of those are repeating. It’s just a very 
sequential thing. And it loops as well, so once it - for example, once Pitch 1 hits like 256,
when it’s read by the print function, it’s going off into some other spot in memory. But
when it’s fed into NR14, for example, it truncates it to the 8 bits and functionally turns it
back into 0. So all of that to get around to, whenever you see my diagram, the way that I 
think of it is there’s this wheel of pitches, right? There is this wheel of pitches that’s going 
over this very bumpy road of durations. Then whatever pitch meets up with whatever the 
duration is at that time and those are the two named factors into whatever sound comes 
out. It’s very - I guess a way to conceptualize it in more musical terms - and this might be 
a bit of a stretch, but follow me here. I think of it as kind of a variation on like, isorhythms.
So you have your repeating color and you have your - it doesn’t exactly work, but the way
that when you think of isorhythms, like you have your talea and your color and they are -
sometimes they can be offset. And you end up with these looping minimalist pattern kind of
things. I kind of think of it like that, except the talea doesn’t quite work because it’s - the
talea is ridiculously chaotic, but there is this color that’s just doing its thing and it’s lining
up with whatever rhythm it lands on. [00:19:01] 

Which is actually a nice tie into: are there pre-existing works outside or inside my
practice which informed this project? It’s very much a process piece because it’s very - it’s 
a self-referential program and it’s spitting out its own image. I think musically one of the 
things that was on my mind was this piece called Jargo’s Table by Van Stiefel. He wrote 
a piece for Laptop Quartet that we played in the Laptop Orchestra of Louisiana when I 
was an undergrad there at LSU. It was this networked isorhythm generator. You had one 
person picking a talea, one person picking a color, one person picking a timbre, and then 
one person like, choosing - basically dropping the events that would trigger all of them to 
play at once. It really informed a lot of my thinking about networked musical instruments, 
because it was something that you needed every person to play and every person contributed 
a little bit of information to the system. It really could only work from this kind of organic 
kind of complex integrated way where everybody is pushing a little bit to make this thing 
happen. Obviously Downstream is a solo piece, but still that piece informs the way that I 
think about musical systems. And it comes out here, I feel like, in kind of iterating on this 
isorhythmic approach by taking these repeating patterns and using them in this way that 
feels very specific to the Game Boy to the medium that it’s running on. I hope that made 
sense. [00:22:53] 

As far as other stuff that informed it - with Downstream in particular, I’m just trying 
to think of what I was into back then. For my work as a whole, obviously there is a big 
history of networked ensemble music that I was very into. And that informed my work. I 
think I also had like, kicking around in my head, a lot of process art in general. The semester 
before I had taken this electronic art class with James Fei at Mills and we watched a lot of 
process films. I wish I could remember them more because I really liked them. But I’m a 
bad artist and I can’t remember my inspirations. We were watching a lot of different process 
films or documentation of art that - I’m looking up one of them. This was not my favorite 





of them, but there was this video that was someone getting off of a bicycle over the span of 
like, 30 minutes or something. It was intense. I can’t remember the name of it. I’m going to 
try to look this up for you. If it rings any bells, let me know. 33 That was the kind of stuff 
that I was around when I was making Downstream. A lot of that rubbed off. [00:26:49] 

What were technical aesthetic paths that led me to this medium? Well, I started off 
as a musician, I’m still a musician, I just do it in a weird way now. Basically I was a 
musician, I was - I’m from Louisiana, I was learning a lot of jazz. I got very interested 
in electronic music and wanted to find - I got very much into live electronic music as well 
and live improvisational electronic music. I wanted to be able to improvise the kinds of 
things that I heard in studio recordings of stuff, which of course led me to get really heavily 
into making musical instruments in MaxMSP and Pure Data. So that got me very into 
that. Also very early on in my musical - electronic music - education, I guess, I saw this big 
connection between the interaction that you would have with a virtual instrument and the 
kinds of interaction that you would find in video games. So I was using video games very 
early on as inspiration for my interaction design for my instruments. And as I continued 
making them, I began incorporating more elements. For me it was very important to have 
some kind of visual that imparted some understanding of the system to the audience in an 
intuitive way. And it turns out when you make a musical instrument that has this intuitive 
narrative visualization and you package it in a way that’s easily distributable for people to 
pick up and play, it kind of looks like a video game. So I just started calling my work 
“games”. And then with the Game Boy, I feel like 
it seemed like a natural extension, I guess, because the Game Boy is a videogame console 
that has grown this double life as an instrument in its own right. So to me it seemed like a 
natural medium to gravitate towards. Even before I started programming for Game Boy 
directly, I was inspired by a lot of early Nintendo sound architecture to the point where I 
would compose string trios using NES tracker software. When I would write pieces for live 
electronic performance, I would still kind of use the same general timbre systems, I guess. I 
would always have two pulse tones, maybe a triangle wave, white noise, not much else. That 
was kind of already my sonic palette. So it wasn’t that much of a stretch when I started. It 
wasn’t that much of a change when I started just programming for Game Boys. [00:33:11] 

33. Duplantis later added that this video is a piece called “The Graphic Method: Bicycle” by Dick 
Raaijmakers, which is viewable on YouTube. 




Which skills did I have to develop for this particular project? I realize I skipped some 
questions back there, but I will make sure that I go back to them. I think the biggest thing 
for this one was really just getting a better understanding of how the Game Boy functioned 
and just getting a better understanding of how information is stored in memory. That 
wasn’t something I'd been thinking about at all when I started making this one. It turns out 
when the structure of your piece is defined by the layout of your program’s memory, once 
it compiles, you need to have at least an intuitive understanding of how it goes together 
in order to make sense of it. This loops into the question of whether, in the code, the 
hard-coded values were determined experimentally or programmatically - for the most part, it 
is very much experimentally. So for example that 1496, which is the end of the program, 
it kind of literally is the end of the program, because once it would progress beyond that, 
it would hit this infinite loop of a string, which doesn’t happen anywhere else. I’m kind of 
assuming that was the main loop and at some point it would hit the main loop and just 
live there. And I wanted it to have an ending and the way that it trails off at that point 
feels right as an ending. Yeah, the 1496 is the spot that it ended. It was aggravating at the 
time, let me tell you. It was very aggravating because any time I changed any other values 
in memory, or anything. Any time I changed the program, everything would shuffle around 
and that number would change and I would have to find it again. There was a certain point 
where I was like, I know I can find the spot again. I’m just going to let it be for now and 
change what I need to change, and then once everything is settled down, I will try to find 
it. [00:36:27] 
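The situation described here - a program reading straight through its own compiled image until an experimentally located end address - can be paraphrased in Python (a sketch under assumed names; `play_memory` and the flat list standing in for the Game Boy memory map are illustrative, not part of the piece):

```python
def play_memory(memory, end=1496):
    """Walk a flat byte array as if it were a score, stopping once the
    read head passes an experimentally found end address."""
    score = []
    addr = 0
    while addr <= end and addr < len(memory):
        score.append(memory[addr] % 256)  # clamp to 8-bit values
        addr += 1
    return score

memory = list(range(2048))   # stand-in for a compiled program image
score = play_memory(memory)  # reads addresses 0 through 1496
```

Any edit to the program shuffles that image around, which is why the sentinel address had to be found again after every change.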

I’m going to go through these different values that you gave me here and tell you more 
about them. So we’ve got, where does the 12 come from? I think I just liked - so ubyte 
bend [?] equals 12. So bend is a variable that changes the pitch bend for one of the pulse 
tones. I’m pretty sure I just picked 12 because it was a sound that I liked and it worked 
as just a baseline, like this value works for if nobody presses anything, it sounds nice. Now 
the thing about bend is that it does get re-written every time someone presses A - yes, I 
remembered that right from five years ago. So every time you press A, the variable bend 
changes and that was in part to have something that changed over a longer span of time 
than just you press a button, a thing happens, you let go, it goes back to where it was. I 
didn’t want that. I wanted some changes to stick around for a while. I always try to have a 
range of different kinds of changes. So there are changes that are immediate, there are some 





that build up over time, there are some that the program is changing values and however 
you interact with it, that changes just depending on when you do it. So I try to have all of 
these different kinds of variables flying around just to give people more things to play with. 
The bend is one of those where anytime you press A, it will just change the timbre of the 
pulse tone. I think 12 was just - I liked the sound of it. Another thing to note is that some 
of the things here might not make sense initially and some of it I’m pretty sure, just from 
my memory of programming it, was because I would have a value - I think this plus two 
for the bend, for example, I don’t know why that plus two is there but I know that there 
were sometimes I would add things to the program and it would cause interesting things to 
happen in the output, like in the stream of gibberish. And when I try to take those out, 
those interesting things that came up in the stream, those would disappear. So there are 
some aspects of this program that probably don’t make a lot of sense on the surface, because 
they are - for whatever reason, it’s changing the structure of memory or adding something 
to memory that’s very interesting to look at. So I think that might be an example of it. 
[00:41:26] 
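The range of interaction types described here - immediate changes that revert on release versus “sticky” changes like `bend` that persist - can be sketched as follows (hypothetical Python; only the idea that pressing A rewrites `bend` comes from the piece):

```python
class Voice:
    """Two kinds of player interaction: one that reverts when the
    button is released, one that sticks around afterwards."""
    def __init__(self):
        self.bend = 12       # baseline value chosen by ear
        self.muted = False   # immediate, non-sticky state

    def press_a(self, position):
        self.bend = position + 2   # sticky: persists after release

    def hold_b(self, down):
        self.muted = down          # immediate: reverts on release

v = Voice()
v.press_a(100)                   # bend stays changed until the next press
v.hold_b(True); v.hold_b(False)  # muted only while held
```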

Going through the list of values that you wanted to know about - 20. This is a fun 
one. So the 20 there - again, these are all experimental, but I’m going to ramble about 
them anyway, because then you get to know more about this piece that you are writing your 
dissertation on, partially. That’s really cool. So the 20 is connected to a system of adding 
white space to the print out of downstream and what that does functionally is it changes - 
very minutely changes the durations of each pitch. I think the move equals 20 was really just 
I only wanted it to add at most one full line of white space. And as I’m looking at it, I’m 
realizing I made some mistakes when I was programming it. I’m looking at it and realizing 
there is absolutely a way to mess with the value so you end up with a lot more white space 
than I intended, which makes sense now. I have seen some people mess around with that 
and just add an entire screen full of white space and I’ve always wondered how they did that 
and now I see the error of my ways in my own code. So I guess this is partially to talk about 
the Game Boy’s structure. The Game Boy’s screen is divided up into different tiles and they 
go basically - going horizontally, there are 20 tiles on the screen. Each of them is eight pixels 
across. So the screen is 160 pixels across, 144 tall. So X is 160, Y is 144. If each of them is 
eight pixels across, then 160 divided by eight is 20. So this was an attempt for me to just add 
one - at maximum, someone would just be able to add one whole line of white space. But I 





really messed up these functions, which means someone can break out of that too and start 
adding even more white space than was intended. That’s really neat. I’ve always wondered 
why that happened and I just never looked back at this program. [00:45:58] 
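One plausible reading of that error, judging from the `dpad()` handler in the Appendix E listing (this is an assumption about the bug, not the composer’s own diagnosis): `move` is an unsigned byte, the UP branch wraps 20 down to 0, and the DOWN branch then underflows past its `< 1` guard.

```python
def press_down(move):
    """Mimic the DOWN handler with 8-bit unsigned arithmetic:
    decrement, then reset to 20 only if the result is below 1."""
    move = (move - 1) % 256   # UBYTE underflow: 0 - 1 -> 255
    if move < 1:
        move = 20
    return move

assert press_down(1) == 20    # intended wrap
assert press_down(0) == 255   # underflow: far more than one 20-tile line of spaces
```

A subsequent RIGHT press would then print `move` spaces, well past a single screen line.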

856 - I remember this one. In fact there is even a comment on it, at least in my version 
of the code here. This skips a long, boring section. There’s that spot in the middle of the 
piece - I think earlier in this voice note I was talking about how that might be one of 
the interaction functions, but I’m not sure - and it hits this very slow section where nothing 
much changes on screen, it’s just, bum, bum, bum, bum, bum, [et cetera.] .... you know 
how it sounds. It hits this very long section. It hits that spot at 856 and if I didn’t take 
out this section, it would be like two or three minutes long because it’s just such a large 
chunk of memory. So when it hits 856, it skips ahead by 64 in order to shorten that section. 
So it goes from three minutes or something to maybe 45 seconds, I don’t know. I haven’t 
timed it, but I was just kind of listening with my ear. The reason that it is 64 is because 
the pitch wheel is always defined by this value of zero to 255, because it’s an 8 bit register of 
memory that’s determining the sounds that come out of it. If I had put in 60 or something 
that was not divisible by 8, the rhythm would go off. [verbally describes rhythm 00:49:25] I 
can’t do it. It would stutter. There we go, you didn’t need to listen to me try to vocalize 
this piece. It would stutter. So it had to be divisible by 8 in order for it not to stutter. 
And also - I’m going to send you this documentation too, but if you look at the register for 
NR-14, you’ll see the first three bits have the frequency data, and the next three bits don’t 
really have anything going on. And then the last two do. The way that ends up working is 
you have [sings the rhythm again] That’s doing its thing. It will do that 8 times and then 
once it hits D-6 and D-7, when those values change, it changes the sounds that’s coming 
out, which kind of divides it up into these groups of 64. You’ll have 64, which is 8 measures 
of the [sings rhythm], doing its thing and then you’ll have another 8 measures of - [sings], 
another 8 measures [sings] and then another 8 measures of it. At some point it turns off. 
Eight measures of rest. So the 64, outside of just being nicely divisible by 8 and scooting 
the program up to a nice calm spot, also helps to not break up those 8 measures either, 
so you have the 8 measure chunk, 8 measure chunk, 8 measure chunk, and then once it hits 
there it skips to a new 8 measure chunk. So that doesn’t stutter either. [00:52:13] 
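The modular arithmetic in this explanation is easy to verify (a quick Python check of the reasoning, not code from the piece): a skip that is a multiple of 8 keeps the low-three-bit rhythm cycle in phase, and a multiple of 64 also preserves the grouping of eight 8-step measures.

```python
def phase(addr, cycle=8):
    """Position of an address within the repeating rhythm cycle."""
    return addr % cycle

assert phase(856 + 64) == phase(856)   # multiple of 8: no stutter
assert phase(856 + 60) != phase(856)   # a skip of 60 would land mid-pattern
assert (856 + 64) % 64 == 856 % 64     # the 8-measure grouping survives too
```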

I hope all this information is useful. So all of those values were a means to an end. I’m 
not sitting there being like, ha ha, I’m going to put in a 64 here. It’s very much trying to 





listen to what the program needs and responding to it. [00:52:44] 

If I have saved draft versions of the code, would you be willing to share these with 
me for comparison? I unfortunately do not, I’m really sorry. This was before I understood 
version control. So I didn’t save any of those. Honestly, for me, it didn’t feel stable until 
it was done. I also programmed it very quickly. I programmed it while I was going to 
SEAMUS [Society for Electro-Acoustic Music in the United States]. I went to the SEAMUS Conference 2015 at 
Virginia Tech and I decided to make a whole trip out of it and I went to my brother’s place 
in Richmond beforehand. I’m pulling up a calendar here. So I flew out there on a Sunday 
and that’s when I started working on the weird accordion app that turned into Downstream. 
I forget if I mentioned this already in this big long ramble, but it started off with me trying 
to make an accordion, but I didn’t know what the values were so I didn’t know what values 
became what frequencies. So I made this weird little program to just go off of them and 
it didn’t work out. I told you this already. It was Sunday when that happened, when I 
was flying out there, and the conference started that Thursday. And by Thursday, I had 
had basically a finished game. That was over the span of like, three days that I made this 
program and it’s basically unchanged since then. I think on July 31st, 2015, I went back 
and added some comments, but outside of that, yeah, it’s basically unchanged. So a lot of 
it was very - it came together very quickly, it was very - just like an intuitive, like, got to 
throw this all together. I unfortunately don’t have anything else to share. [00:56:55] 

I also unfortunately can’t work on it anymore. I can’t add to it because changing the 
program also changes the content of the piece. It makes it very hard to do any kind of 
edit to it whatsoever. And not just that, but also the way that it compiled. Part of that 
was specifically the way that it compiled on my old laptop, because my current laptop does 
not compile the same way. I have the same software, I’ve tried compiling it the same way, 
but for whatever reason, on my computer, my current laptop on Windows 8, it compiles it 
differently than it did when it was my old laptop on Windows 7. And it just doesn’t work 
the same. So I can’t change it right now. Downstream is just what it is. That code is just 
where it is. [00:58:35] 

I think I missed one of these questions. No, I covered that: What ideas did I begin with 
and what developed experimentally? Everything developed experimentally. I was just trying 
to make a fun little accordion instrument and I hit upon this strange system and decided, 
no, I’m going to play around with this instead. It was all experimental. I think if there was 





anything that I brought to it beforehand that wasn’t experimental, it was just these general 
ideas of how the interaction would work. Like I said, I always try to have an immediate 
response and a response that changes some aspect of it - like, an immediate response, a 
response that sticks around. I try to have a few of each of those kind of interactions. 
[00:59:55] 

What was most important to me in the making process? I forget if I explicitly said 
this after - when I was talking about my initial programming of the piece. But the thing 
that was important to me was once I experienced this bug, which felt like just this rush of 
a hurricane or river rapids, I mean, that’s what I ended up - it just felt like a rush of water 
just enveloping me. And I wanted to make a piece that captured that experience of just 
being caught in the rapids and not really having a way out. And that kind of ties into these 
larger ideas that I was working with that aren’t really connected to the instrument and how 
it functions. You can hear this in other talks that I’ve done. [01:01:56] 

Just to summarize, I was in a very bad place when I was making Downstream. I was 
still working out my identity as a trans woman and was really starting to come to terms with 
the fact that I was queer. And as this was happening, there was a lot of tragedy happening 
around my own community. A lot of - even if I didn’t know it at the time that they were 
queer - a lot of queer friends of mine were also struggling. They were going through a lot 
of things. There were some folks who ended up hospitalized and some who were trying to 
kill themselves and some folks who succeeded in killing themselves. So there was a lot of 
this happening over the span of about two or three months. This was also right after the 
[redacted name] incident. That was in December 2014, I want to say. So as I was figuring 
out myself, there was a lot of tragedy up in the air around just queer folks in general, but 
also in my community. I was seeing this pattern where something bad would happen to a 
queer person and the people who didn’t know or who didn’t want to accept it, would address 
the tragedy, but also frame it as though we can never know. That was the phrase that I 
remember a lot. We can never know why this happened. Who can say? To me and to other 
queer folks around me, it was very clear that we were catching a glimpse of the weight of 
homophobia and transphobia and queerphobia on our community and on these people who 
were hurting. This large, unspoken weight that was crushing everyone. I was just kind of 
trying to wrap my head around the hopelessness of glimpsing the giant specter of, we may 
never know. And whenever this bug happened, I was just like, that’s it, that’s the feeling 





that I have right now. So I tried to capture it and release it as catharsis. [01:06:43] 

So all the technical aspects of it - were very much secondary to me trying to release 
this primal scream onto the world. I think those are all the questions. Great note to end 
that on. I think that’s all the questions that you sent. 



APPENDIX C 

Patch for Pulse Music and Pulse Music Variation 


Figures C.1 and C.2 detail the interface assembled in Purr Data for Pulse Music and Pulse 
Music Variation as discussed in sections 5.1 and 5.2. 

A public-facing discussion including the full code, documentation and recordings is 
available here. 





Figure C.1: The Purr Data patch used to perform Pulse Music and Pulse Music 
Variation at the Arete Gallery in March 2019. 











































































Figure C.2: The subpatch for oscillator 7. Each of the 12 oscillator subpatches 
contains a similar arrangement, with modified routing. 















APPENDIX D 
Patch for Pygmy Gamelan 


Figure D.1 details the interface assembled in Purr Data for Pygmy Gamelan as discussed in 
section 5.3. 

A public-facing discussion including the full code, documentation and recordings is 
available here. 



Figure D.1: The Pygmy Gamelan Purr Data patch as discussed in section 5.3. The 
patch is annotated: “this pure data iteration by Ezra J. Teboul, 5/2019 / 
cc-by-nc-sa license / redthunderaudio.com / see notes for more detail.” 




























APPENDIX E 

Code for Tammy Duplantis’ Downstream 


    #include <gb/gb.h> //This is the Game Boy Development Kit.
    #include <gb/drawing.h>
    #include <stdio.h>

    unsigned int pitch1;
    UBYTE pitch2 = 0;
    UBYTE bend = 12;
    UBYTE timer = 0;
    UBYTE move = 1;
    UBYTE i;

    void sound_init(void) //This initializes the sound chip.
    {
        NR52_REG = 0xFFU;
        NR51_REG = 0x00U;
        NR50_REG = 0x77U;
    }

    void sound_A(void) //This is a pulse tone.
    {
        NR10_REG = bend;   //This value determines the change in frequency after being called.
        NR11_REG = 0x80U;
        NR12_REG = 0xF0U;
        NR13_REG = pitch2; //Frequency is an 11 bit value; these are the lower 8 bits.
        NR14_REG = pitch1; //The lowest three bits of this value are the highest three of the frequency.
        NR51_REG |= 0x11;
    }

    void sound_B(void) //This is white noise.
    {
        NR41_REG = 0x1FU;
        NR42_REG = 0xF2U;
        NR43_REG = pitch1;
        NR44_REG = 0xC0U;
        NR51_REG |= 0x88;
    }

    void sound_C() //This is a pulse tone.
    {
        NR21_REG = 0x80U;
        NR22_REG = 0x73U;
        NR23_REG = pitch1;
        NR24_REG = pitch1+3;
        NR51_REG |= 0x22;
    }

    void dpad()
    {
        if (joypad() & J_UP) { //Adds to # of white spaces.
            move++;
            if (move > 20) {
                move = 0;
            }
        }
        if (joypad() & J_DOWN) { //Subtracts from # of white spaces.
            move--;
            if (move < 1) {
                move = 20;
            }
        }
        if (joypad() & J_RIGHT) { //Adds a block of white spaces to the pattern.
            for (i = 0; i < move; i++) {
                printf(" ");
            }
        }
        if (joypad() & J_LEFT) { //Adds a block of white spaces to the pattern.
            for (i = 0; i < 20-move; i++) {
                printf(" ");
            }
        }
    }

    void main() //This is the main loop of the program.
    {
        sound_init(); //Initialize the sound.
        sound_B();
        printf("You are floating\ndownstream.\n\n");
        waitpadup();
        while (!joypad()) {
        }
        sound_B();
        printf("The waters rush\naround you.\n\n");
        waitpadup();
        while (!joypad()) {
        }
        sound_B();
        printf("You are lost.\n");
        waitpadup();
        while (!joypad()) {
        }
        while (pitch1 <= 1496) { //The loop! This number changes when the program stops printing.
            if (joypad() & J_A) { //This stops the main stream and replaces it with other values.
                bend = pitch1+2;
                sound_B();
                dpad();
                printf("%d%c", bend, bend);
            } else {
                sound_B();
                sound_A();
                sound_C();
                dpad();
                printf(pitch1); //This is the cause of most of the characters seen onscreen.
                printf("%c", bend);
                if (joypad() & J_B) { //This causes the program to "skip" on one printout.
                    printf("\n");
                } else {
                    pitch1++;
                    timer++;
                    if (pitch1 == 856) { //This skips a long boring section.
                        pitch1 += 64;
                    }
                }
                if (timer == 7) {
                    pitch2 += 8;
                    timer = 0;
                }
            }
        }
        NR52_REG = 0xF8; //Mutes most things at the end.
        for (i = 0; i < 29; i++) { //This clears the screen and plays white noise.
            sound_B();
            printf("\n");
        }
        NR52_REG = 0x00; //This mutes everything.
    }

Listing E.1: Downstream Code by Tammy Duplantis (2015) 
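To see how the counters in the Downstream listing interact, here is a Python paraphrase of the main loop’s bookkeeping with sound and input stripped out (an illustration, not a port; variable names mirror the C code):

```python
def run_counters(end=1496, skip_at=856, skip=64):
    """pitch1 advances until it passes `end`, jumping ahead once at
    `skip_at`; every 7 steps, timer wraps and pitch2 rises by 8 (mod 256)."""
    pitch1, pitch2, timer, steps = 0, 0, 0, 0
    while pitch1 <= end:
        pitch1 += 1
        timer += 1
        if pitch1 == skip_at:
            pitch1 += skip   # the "long boring section" skip
        if timer == 7:
            pitch2 = (pitch2 + 8) % 256
            timer = 0
        steps += 1
    return pitch1, steps

final, steps = run_counters()
# The skip shortens the piece by exactly 64 steps: 1497 - 64 = 1433.
```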


APPENDIX F 

Patch for Stepper Choir, Music of the Spheres and Multichannel 

Motor Music 

Figures F.1 and F.2 detail the interface assembled in MaxMSP for Stepper Choir, Music of 
the Spheres and Multichannel Motor Music as detailed in section 5.6. 

A public-facing discussion including the full code, documentation and recordings is 
available here. A copy of the complete and latest C++ code, written by Laurent Herrier with 
minor modification by the author, is available here. 


Figure F.1: The MaxMSP top level patch for Stepper Choir, Music of the 
Spheres and Multichannel Motor Music. 






























Figure F.2: The MaxMSP subpatch which parses the spatialization info for the x 
coordinate as processed by the program. The subpatches for the y and z axes 
are comparable. 



















APPENDIX G 

Code for matrix inversion and results of LTSpice simulations 


This is the Python code written by Dr. Kurt Werner, based on an original script by Werner 
and with extensions by the author. It implements a number of things: first, it automates the 
matrix inversion discussed in section 5.3.5. Second, it lists the resistor and capacitor values 
which would most likely have been available to Paul De Marinis in the time period and place 
where he was building the Pygmy Gamelan (based on the International Electrotechnical 
Commission’s 1963 standard E-series values as detailed in publication 60063:1963). It then 
uses these values, along with the mathematical model from section 5.3.5, to find the 5 pairs 
of standard values with resonant frequency peaks closest to those measured in the recording 
of Forest Booties, with the value x equal to 11 because the 12 notated in the schematic 
resulted, in most simulations, in unstable filters. See section 5.3.5 for the detailed discussion 
and resulting plot (figure 5.8). 
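For reference, the long literal arrays in the script are just IEC 60063 mantissas swept across powers of ten; an equivalent list can be generated compactly (an illustration of where the candidate values come from, not the script’s own method - `e_series_decades` is a hypothetical helper):

```python
# E12 mantissas from IEC 60063, the series visible in the capacitor list
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def e_series_decades(mantissas, low_exp, high_exp):
    """Sweep standard mantissas across decades low_exp..high_exp inclusive."""
    return sorted(m * 10**e
                  for e in range(low_exp, high_exp + 1)
                  for m in mantissas)

caps = e_series_decades(E12, -11, -6)   # 10 pF up to 8.2 uF
```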

    #!/usr/bin/env python
    # coding: utf-8

    # In[21]:

    import sympy as sym
    import numpy as np
    import matplotlib.pyplot as plt

    sym.init_printing()

    plt.figure(figsize=(10, 10))

    sR, sx, sC, ss = sym.symbols('R x C s')

    matrix = sym.Matrix([
        [1/sR+ss*sC,  -ss*sC,         0,           -1/sR,       0],
        [-ss*sC,      sx/sR+3*ss*sC,  -ss*sC,      -ss*sC,      0],
        [0,           -ss*sC,         1/sR+ss*sC,  -1/sR,      +1],
        [-1/sR,       -ss*sC,         -1/sR,       2/sR+ss*sC,  0],
        [+1,          0,              0,           0,           0]
    ])

    TheInverse = matrix.inv()

    TFsymbolic = sym.simplify(TheInverse[2, 0])

    print(TFsymbolic)

    #R = 1000*1000;
    #C = 0.001*10**(-6)
    x = 12

    listr = np.array([1.0, 10, 100, 1000, 10000, 100000, 1000000,
                      1.1, 11, 110, 1100, 11000, 110000, 1100000,
                      1.2, 12, 120, 1200, 12000, 120000, 1200000,
                      1.3, 13, 130, 1300, 13000, 130000, 1300000,
                      1.5, 15, 150, 1500, 15000, 150000, 1500000,
                      1.6, 16, 160, 1600, 16000, 160000, 1600000,
                      1.8, 18, 180, 1800, 18000, 180000, 1800000,
                      2.0, 20, 200, 2000, 20000, 200000, 2000000,
                      2.2, 22, 220, 2200, 22000, 220000, 2200000,
                      2.4, 24, 240, 2400, 24000, 240000, 2400000,
                      2.7, 27, 270, 2700, 27000, 270000, 2700000,
                      3.0, 30, 300, 3000, 30000, 300000, 3000000,
                      3.3, 33, 330, 3300, 33000, 330000, 3300000,
                      3.6, 36, 360, 3600, 36000, 360000, 3600000,
                      3.9, 39, 390, 3900, 39000, 390000, 3900000,
                      4.3, 43, 430, 4300, 43000, 430000, 4300000,
                      4.7, 47, 470, 4700, 47000, 470000, 4700000,
                      5.1, 51, 510, 5100, 51000, 510000, 5100000,
                      5.6, 56, 560, 5600, 56000, 560000, 5600000,
                      6.2, 62, 620, 6200, 62000, 620000, 6200000,
                      6.8, 68, 680, 6800, 68000, 680000, 6800000,
                      7.5, 75, 750, 7500, 75000, 750000, 7500000,
                      8.2, 82, 820, 8200, 82000, 820000, 8200000,
                      9.1, 91, 910, 9100, 91000, 910000, 9100000,
                      10000000, 11000000, 1200000, 13000000, 15000000,
                      16000000, 18000000, 20000000, 22000000])

    listc = np.array([10*10**(-12), 100*10**-12, 1.0*10**-9, 10*10**-9,
                      100*10**-9, 1.0*10**-6, 10*10**-6, 100*10**-6,
                      1.0*10**-3, 10*10**-3,
                      12*10**-12, 120*10**-12, 1.2*10**-9, 12*10**-9,
                      120*10**-9, 1.2*10**-6,
                      15*10**-12, 150*10**-12, 1.5*10**-9, 15*10**-9,
                      150*10**-9, 1.5*10**-6, 15*10**-6, 150*10**-6,
                      1.5*10**-3, 15*10**-3,
                      18*10**-12, 180*10**-12, 1.8*10**-9, 18*10**-9,
                      180*10**-9, 1.8*10**-6,
                      22*10**-12, 220*10**-12, 2.2*10**-9,
                      22*10**-9, 220*10**-9, 2.2*10**-6, 22*10**-6, 220*10**-6, 2.2*10**-3,
                      22*10**-3, 27*10**-12, 270*10**-12, 2.7*10**-9, 27*10**-9, 270*10**-9,
                      2.7*10**-6, 3.3*10**-12, 33*10**-12, 330*10**-12, 3.3*10**-9,
                      33*10**-9, 330*10**-9, 3.3*10**-6, 33*10**-6, 330*10**-6, 3.3*10**-3,
                      33*10**-3, 39*10**-12, 390*10**-12, 3.9*10**-9, 39*10**-9, 390*10**-9,
                      3.9*10**-6, 4.7*10**-12, 47*10**-12, 470*10**-12, 4.7*10**-9,
                      47*10**-9, 470*10**-9, 4.7*10**-6, 47*10**-6, 470*10**-6, 4.7*10**-3,
                      47*10**-3, 56*10**-12, 560*10**-12, 5.6*10**-9, 56*10**-9, 560*10**-9,
                      5.6*10**-6, 6.8*10**-12, 68*10**-12, 680*10**-12, 6.8*10**-9,
                      68*10**-9, 680*10**-9, 6.8*10**-6, 68*10**-6, 680*10**-6, 6.8*10**-3,
                      68*10**-3, 82*10**-12, 820*10**-12, 8.2*10**-9, 82*10**-9, 820*10**-9,
                      8.2*10**-6])

    listr = listr[(listr > 10000)]
    listr = listr[(listr < 1000000)]

    listc = listc[(listc > 0.001*10**(-6))]
    listc = listc[(listc < 0.1*10**(-6))]

    # listr = [1.0, 10, 100, 1000, 10000, 100000, 1000000]
    # listc = [10*10**(-12), 100*10**-12, 1.0*10**-9, 10*10**-9,
    #          1.0*10**-6, 10*10**-6]

    listrf = [406, 833, 946, 1166, 1465]
    clen = len(listc)
    rlen = len(listr)
    peaklist = []

    print(clen)
    print(rlen)

    freqPoints = 2000

    fErrors = [1000000, 1000000, 1000000, 1000000, 1000000]
    jmins = [0, 0, 0, 0, 0]
    imins = [0, 0, 0, 0, 0]

    for rf in listrf:
        plt.semilogx([rf, rf], [-50, 20])

    i = 0
    while (i < rlen):
        print(i)
        R = listr[i]
        j = 0
        while (j < clen):
            C = listc[j]
            TF = TFsymbolic.subs([(sR, R), (sC, C), (sx, x)])
            ws = np.logspace(np.log10(20), np.log10(20000), freqPoints)*2.0*np.pi
            H = np.zeros([freqPoints, 1], dtype=complex)
            ind = 0
            for w in ws:
                H[ind] = TF.subs([(ss, w*1j)])/10000000
                ind = ind+1
            listl = 20*np.log10(np.absolute(H))
            amplitudemaxindex = np.argmax(listl)
            frequencyofpeak = ws[amplitudemaxindex]/2.0/np.pi
            for whichPeak in range(5):
                fError = np.absolute(frequencyofpeak - listrf[whichPeak])
                if fError < fErrors[whichPeak]:
                    fErrors[whichPeak] = fError
                    imins[whichPeak] = i
                    jmins[whichPeak] = j
            # peaklist.append(frequencyofpeak)
            # plt.semilogx(ws, listl, linestyle='-', color='r', marker=' ',
            #              linewidth=1, antialiased='True')
            # plt.xlim(20, 20000)
            # plt.draw()
            j = j+1
        i = i+1

    print(imins)
    print(jmins)

    # In[30]:

    # Print out list of R C pairs
    # Plot those against the ideal frequencies

    sym.init_printing()
    plt.figure(figsize=(10, 10))

    for whichPeak in range(5):
        print(["R: ", listr[imins[whichPeak]], " C: ", listc[jmins[whichPeak]]])

    for rf in listrf:
        plt.semilogx([rf, rf], [-50, 20])

    for whichPeak in range(5):
        R = listr[imins[whichPeak]]
        C = listc[jmins[whichPeak]]
        TF = TFsymbolic.subs([(sR, R), (sC, C), (sx, x)])
        H = np.zeros([freqPoints, 1], dtype=complex)
        ind = 0
        for w in ws:
            H[ind] = TF.subs([(ss, w*1j)])/10000000
            ind = ind+1
        listl = 20*np.log10(np.absolute(H))
        plt.semilogx(ws/2.0/np.pi, listl, linestyle='-', color='r', marker=' ',
                     linewidth=1, antialiased='True')

    plt.xlim(20, 20000)
    plt.draw()

    # In[ ]:

Listing G.1: Matrix inversion and transfer function graphing code. 


The figures below (labeled G.1, G.2, G.3, G.4, G.5) are the LTSpice plots illustrating the 
magnitude and phase response of the filters designed with the values reverse-engineered 
with the computer simulation described above. 





























Figure G.1: The plot produced by LTSpice for the filter in figure 5.7 with R = 
20 kΩ and C = 33 nF. 



Figure G.2: The plot produced by LTSpice for the filter in figure 5.7 with R = 
4.7 kΩ and C = 68 nF. 



Figure G.3: The plot produced by LTSpice for the filter in figure 5.7 with R = 
13 kΩ and C = 22 nF. 



















































































































































































































Figure G.4: The plot produced by LTSpice for the filter in figure 5.7 with R = 
15 kΩ and C = 15 nF. 



Figure G.5: The plot produced by LTSpice for the filter in figure 5.7 with R = 
2.2 kΩ and C = 82 nF. 